ATI - Full Tri Performance Hit

Yes, there's no point in trying to converse with a relentless ATi drone.
JVD appears to be right up there with Hellbinder as far as blind loyalty / defending goes. :rolleyes:
 
jvd said:
radar1200gs said:
I really wouldn't venture too far down the "nVidia paying devs money to show them in a good light and ATI in a bad light" path if I were you.

You might just force me to bring up the topic of Valve, Gabe Newell, Half-Life 2, 5 million dollars and a shady day event.

Right. Because if ATI paid Valve to cast nVidia in a bad light, there wouldn't be any mixed-mode paths for the NV3x.

...(snip)

Sorry jvd, stop trying to change the subject, you said:
Right, there is a lot you didn't address, like nVidia using money to have developers cast ATI in a bad light. They did it before, so my point stands about a Doom 3 patch taking away a benchmarking program when ATI is ahead, or EverQuest 2 locking out resolutions for ATI users. These are all things nVidia has done with other games and will do again.

Valve were paid $5 million by ATi, Valve did name ATi as their preferred vendor for graphics cards for Half-Life 2. Gabe Newell did criticize nVidia at an ATi event.
 
Valve were paid $5 million by ATi, Valve did name ATi as their preferred vendor for graphics cards for Half-Life 2. Gabe Newell did criticize nVidia at an ATi event.

Cast nVidia in a bad light how? Did they take out a benchmark that showed nVidia beating ATI? Did they lock out resolutions for nVidia cards?

No. They showed performance on both cards, which other DX9 benchmarks back up. They then went on to write a special path to make the NV3x look better and perform better.

A far cry from anything TWIMTBP partners have done.

As a matter of fact, the $5 million was to pay for the vouchers put into ATI cards.

So sorry, but you're wrong, radar.
 
radar1200gs said:
Valve were paid $5 million by ATi, Valve did name ATi as their preferred vendor for graphics cards for Half-Life 2.
And thank GOD they did that, because otherwise you wouldn't have a single chance for a counterstrike (not CS) in this debate.
Would break my heart to see this happen. ALL PRAISE VALVE!!! :LOL:
 
I don't believe there is anything out there yet which shows how well the nv40 handles SM 3.0, so until we can see whether it leads to a performance boost via PS 3.0, or better IQ with minimal FPS loss via VS 3.0, I really fail to see what the continued basis is for arguing any of this. ATI might have had very good reasons for holding off on SM 3.0, just like they didn't have a dedicated FP32 path last generation.

We'll just have to wait and see. Having a feature is not the same as that feature being useful, nor that feature being a good thing.
 
Isn't it funny how nVidia picked up some of ATi's OpenGL extensions with no fuss, yet ATi can't bring themselves to just use SM2.0a, no they have to have 2.0b, just to be different from nVidia.
radar, 2_a doesn't seem like the small step up from 2.0 that 2_b is.

FP16 is good enough for George Lucas and ILM.
And the relevance is...? nVidia themselves provided examples at NV30's launch where FP16 would display artifacts.
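For a rough sense of why FP16 runs out of precision, here's a quick back-of-the-envelope sketch (the halfUlp helper below is just something thrown together for illustration, assuming the s10e5 half format NV3x uses for FP16; a commonly cited failure case is large texture coordinates in dependent reads):

```cpp
#include <cmath>
#include <cstdio>

// Spacing (ULP) between adjacent representable values of an s10e5 half float
// (1 sign, 5 exponent, 10 mantissa bits) near a normalized magnitude x:
// roughly 2^(floor(log2 x) - 10).
double halfUlp(double x)
{
    int e = static_cast<int>(std::floor(std::log2(x)));
    return std::ldexp(1.0, e - 10);
}

int main()
{
    // Near 1.0 the gap is ~0.001, which is fine for colours; near 1024 (think
    // texel units on a large texture) the gap is a whole texel, which is where
    // addressing/filtering artifacts start to show up.
    std::printf("ULP near 1.0    : %g\n", halfUlp(1.0));     // ~0.000977
    std::printf("ULP near 512.0  : %g\n", halfUlp(512.0));   // 0.5
    std::printf("ULP near 1024.0 : %g\n", halfUlp(1024.0));  // 1
    return 0;
}
```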

And the consumer certainly hasn't benefited from the high full-precision requirement in DX9; in fact, this requirement has arguably delayed the massive uptake of DX9-featured games by two years.
In what way?(!) From my perspective, it seems much more plausible that DX9 was delayed because of the GF FX's poor DX9 performance (witness the separate paths for Doom 3, HL2, and Far Cry).

You're just speculating.
ANova, that was my point: that we're both just speculating, yet you passed yours off as (more) substantiated.

As I said, SM3 is more of a small update to SM2 than anything else, imo.
I think you're wrong. I don't think the ability to move to conditionals is a small leap. Though NV40 may not be giving Intel a run for its money anytime soon, I think moving to a more "general-purpose" GPU is no small step.

I'd say it is wishful thinking to claim that the NV3x delayed DX9 uptake, considering that the majority of graphics cards sold today are integrated Intel graphics processors.
jj, by that logic, we'd only be playing DX7 games. No, Intel doesn't drive the graphics market, it merely solidly establishes the bottom rung. Companies like 3dfx, nVidia, and ATi drive the gaming market. Companies like Intel follow (in huge numbers).

In fact, the NV3x was the first top-to-bottom family of cards with DX9 support, irrespective of the performance limitations.
Oh, so now we're ignoring performance and focusing on paper specs? What happened to the Intel argument? Swap a 5200 for every Intel IGP and we still wouldn't be awash in DX9-laden games.

Quasar, what was I thinking? I can't believe I forgot about the bus width--that definitely makes it close enough to quadrupled for me. I'm actually sorry I made you waste your time correcting me on such a silly oversight. :)
 
radar1200gs said:
Valve were paid $5 million by ATi, Valve did name ATi as their preferred vendor for graphics cards for Half-Life 2. Gabe Newell did criticize nVidia at an ATi event.

Do I really have to get into this? Valve were paid $5 million after Shader Day. They had no brand loyalty before then; why would they suddenly become fanATIcs overnight unless there was a legitimate reason for it, and that reason was shown? Half-Life 2 is a very PS 2.0-intensive game, and as we all know (or at least should know), the NV3x has many problems in that department. Nvidia was trying to cover this up by adding optimizations that would typically increase performance 30% or more in exchange for reduced image quality. Why did they do this? Because they are a business, and as such they are in it to make money. If the competition has a better product, you have to do something to keep selling your own, regardless of how good it is, in order to at least break even and preferably still come out with a profit.

Valve knew what Nvidia was doing, and since customers would have come complaining to them about performance problems with GeForce FX cards, it was their right to make the truth public. Because of that, Nvidia and Valve no longer like each other; it's obvious why, since Valve helped Nvidia's competition. ATI realized what was happening and decided to capitalize on it by offering Valve money in exchange for bundling what is considered to be one of the greatest games of all time with their product. There is no conspiracy to overthrow Nvidia, Radar, it's simply business. Nvidia screwed up, they tried to cover their asses, and they were caught with their pants down.
 
Pete said:
I think you're wrong. I don't think the ability to move to conditionals is a small leap. Though NV40 may not be giving Intel a run for its money anytime soon, I think moving to a more "general-purpose" GPU is no small step.

Look, most of what SM3 adds are mere efficiency improvements which may allow for more performance. However, things like dynamic branching will more than likely actually decrease performance unless games use shaders with high instruction counts; we're talking 200 and above here. By the time this happens the 6800 will run them very slowly at any rate; UE3 is a perfect example of this. So right there you can count those enhancements out as advantages. Yes, SM3 allows for easier programming, but that's about it. IMO that qualifies as a minute upgrade. Truth is, we have yet to see any SM3 demos, even from Nvidia. Doesn't this strike you as odd? Why would Nvidia decide not to include any demos unless there are no visible advantages to it? I think they used UE3 as an attempt to advertise SM3's importance, all the while trying to keep it a secret that Epic used SM2 almost entirely for it.
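Just to pin down what we're arguing about, here's a rough sketch of the kind of per-pixel conditional at stake (the HLSL and its names are made up for illustration, and the compile helper uses the D3DX9 D3DXCompileShader call as I remember it; the only thing that changes between the two shader models is the target profile string):

```cpp
#include <d3dx9.h>   // D3DX9 helper library from the DirectX 9 SDK

// Illustrative HLSL only: a per-pixel conditional around a lighting term.
static const char g_src[] =
    "float4 main(float3 N : TEXCOORD0, float3 L : TEXCOORD1) : COLOR  \n"
    "{                                                                 \n"
    "    float  ndotl = dot(normalize(N), normalize(L));               \n"
    "    float4 c     = float4(0, 0, 0, 1);                            \n"
    "    // ps_3_0 can emit a real dynamic branch here, skipping the   \n"
    "    // work for back-facing pixels; ps_2_0 has to flatten it, so  \n"
    "    // both sides are evaluated and one result is selected.       \n"
    "    if (ndotl > 0)                                                \n"
    "    {                                                             \n"
    "        c.rgb = float3(ndotl, ndotl, ndotl); // stand-in for a costly term \n"
    "    }                                                             \n"
    "    return c;                                                     \n"
    "}                                                                 \n";

// Compile the same source for "ps_2_0" or "ps_3_0"; nothing else changes.
HRESULT CompileFor(const char* profile, LPD3DXBUFFER* byteCode)
{
    return D3DXCompileShader(g_src, sizeof(g_src) - 1,
                             NULL, NULL, "main", profile,
                             0, byteCode, NULL, NULL);
}
```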

As for the failure of DX9 uptake: you're only fooling yourself if you think the 9200 is at fault for that. The NV3x had all sorts of problems with DX9; in fact, Nvidia themselves have told developers to support DX8 instead of DX9 on the 5200 because it simply isn't capable of anywhere near playable framerates with it. What's the point in supporting it if you cannot even use it? It just becomes a marketing gimmick. Furthermore, Intel's new integrated graphics support SM2, not 3. To me this says we won't be seeing very much SM3 uptake anytime soon, especially since about 100 people in the world have an SM3-capable card atm, if that.
 
Pete said:
...(snip)

FP16 is good enough for George Lucas and ILM.
And the relevance is...? nVidia themselves provided examples at NV30's launch where FP16 would display artifacts.

And the consumer certainly hasn't benefited from the high full-precision requirement in DX9; in fact, this requirement has arguably delayed the massive uptake of DX9-featured games by two years.
In what way?(!) From my perspective, it seems much more plausible that DX9 was delayed because of the GF FX's poor DX9 performance (witness the separate paths for Doom 3, HL2, and Far Cry).

...(snip)

On your FP16 point: first, the launch drivers were very immature; secondly, that's why FP32 exists, to handle the few situations FP16 can't.

The DX9 performance is only poor when the card is forced to always work in FP32. FP16 was plenty of precision for the launch of DX9.
 
ANova, I'm discussing the effort involved in architecting an SM3.0-capable core, not how it'll affect the games you're playing now. And I didn't blame the 9200 for anything--you must be referring to someone else.

radar, firstly, please don't quote a big post like that to reply to only one line. Secondly, launch drivers had nothing to do with nVidia's PR material showing FP16 vs. FP32 artifacts. Thirdly, who cares about ILM? If they can use FP16 without artifacting, bully for them. But we're seeing games that artifact at FP16, and that's relevant for me. You say we have FP32 for those situations? Then your ILM example is doubly pointless.

But now you shift AGAIN to saying FP16 has plenty of precision. Look, NV3x is slower than R3x0 even with FP16. Please stop trying to debate something that's shown by the game benchmarks you so treasure above synthetics.
 
Under ideal circumstances the GeForce FX range was supposed to run FP16 by default and FP32 when needed, rather than one or the other as a whole affair.

In the real world, however, ideals tend to fall short when a competitive edge is required.
 
In my opinion, most of the controversy (outside of deliberate shortcuts) around DX9 comes from the way Microsoft chose to handle precision.

Rather than a partial precision hint, we should have had a full precision hint.
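To be clear about the mechanism as it stands (illustrative HLSL only, held in a C++ string; the sampler and variable names are made up): precision currently defaults to full (FP24 minimum per the spec, FP32 on NV3x), and the developer has to opt down per value with 'half', which the compiler can map to the _pp partial-precision modifier. My point is that the default and the hint could have gone the other way around.

```cpp
// Illustrative HLSL (as a C++ string) showing how the existing DX9 precision
// hint works: no annotation means full precision (FP24 minimum per the spec,
// FP32 on NV3x); 'half' is the opt-down hint the compiler can map to the _pp
// (partial precision, FP16 minimum) modifier in ps_2_x assembly.
static const char g_precisionHintExample[] =
    "sampler BaseMap;                                                       \n"
    "float4 main(half2 uv : TEXCOORD0) : COLOR                              \n"
    "{                                                                      \n"
    "    half4  albedo = tex2D(BaseMap, uv);  // 'half' -> _pp hint, FP16 ok\n"
    "    float4 result = albedo * albedo.a;   // no hint -> full precision  \n"
    "    return result;                                                     \n"
    "}                                                                      \n";
```

With a full-precision hint instead, that default and the annotation would simply be swapped.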
 
CitizenC said:
Yes, there's no point in trying to converse with a relentless ATi drone.
JVD appears to be right up there with Hellbinder as far as blind loyalty / defending goes. :rolleyes:
HAHAHHAHHA

I just love it when people try to talk sense with facts, figures, information, etc., and some people respond with stuff like this. :LOL: :LOL: So apparently JVD and I have "blind loyalty" while you and radar are so obviously open-minded and balanced, founded on sound, non-PR-driven info... :LOL: :LOL:

I would step back if I were you guys and really consider who the "mindless drones" really are. I personally have nothing against Nvidia except some of their business practices.
 
radar1200gs said:
In my opinion, most of the controversy (outside of deliberate shortcuts) around DX9 comes from the way Microsoft chose to handle precision.

Rather than a partial precision hint, we should have had a full precision hint.

AHAHHA
 
radar1200gs said:
In my opinion, most of the controversy (outside of deliberate shortcuts) around DX9 comes from the way Microsoft chose to handle precision.

Rather than a partial precision hint, we should have had a full precision hint.
I agree with you.

Let's just make FP16 the "minimum" for SM2 (I think SM3 also), and then REQUIRE FP32 for SM4 in DX Next.

It makes a lot more sense than what's going on now.
 
radar1200gs said:
I really wouldn't venture too far down the "nVidia paying devs money to show them in a good light and ATI in a bad light" path if I were you.

You might just force me to bring up the topic of Valve, Gabe Newell, Half-Life 2, 5 million dollars and a shady day event.
I find this point really funny..

Nvidia has a LONG history of paying and/or "helping" developers write code that favors or only runs correctly on their hardware, even via device ID detection. Are you really going to wave this ONE example around as balance, or as a threat not to speak?

Puhlease... :rolleyes:

Nvidia already pays and has partnerships with Epic and id, and there is some OBVIOUS favoritism towards Nvidia going on in both those camps.

As for Valve..

They have recommended Nvidia hardware for years now. Until the sham that was the NV30 and its siblings... and their nearly endless ream of deception, misleading PR, driver hacks, seriously reduced quality, etc etc etc.

Even then, Nvidia COULD have paid them 10 or even 50 million for a TWIMTBP program partnership. Valve chose ATi at the time for 5 million because of INTELLECTUAL HONESTY and INTEGRITY, and the fact that ATi's hardware at the time actually ran the code PROPERLY and with decent speed.

Which cannot be said for id Software and the Doom engine's little ARB2/NV30 path crap... or his latest "let's change the ARB2 path into the NV30 path and still call it the ARB2 path" trick. Or Epic using PS1.4 for basically NOTHING even though they could have included support that would have benefited performance. Or how about the Tomb Raider benchmark strong-arm tactic fiasco... Or GSC Game World (STALKER) claiming that their DX9 code runs faster on Nvidia hardware (because Nvidia made it for them)... Or the old NWN water effect issue... I could go on.

How about you guys try using a little intellectual honesty yourselves for a change?
 
ChrisRay said:
Under ideal circumstances the GeForce FX range was supposed to run FP16 by default and FP32 when needed, rather than one or the other as a whole affair.

In the real world, however, ideals tend to fall short when a competitive edge is required.

That's quite revisionist. It's certainly not how Nvidia pitched the product to the public. In fact they spent quite a lot of time telling people how FP24 wasn't good enough and how you needed FP32, but now you are telling us that FP16 is the default?
 
Seeing as the last two pages have had nothing to do with the actual subject of this thread, can I ask that people return to it? Either that, or take the current debate to a new thread and let this one die.
 
jimmyjames123 said:
We have already been given some insight into why ATI was not able to implement SM 3.0 this generation. ATI's CEO Dave Orton mentioned that an SM 3.0 part would require a significantly larger die size than the current R4xx cards have, and ATI was unsure about how producible such a design would be given the current processes.

It was never the case that ATi "was not able" to implement ps3.0 in R420. What Orton said was that ATi elected to exclude it, for two reasons:

(1) It required additional circuitry which would have made the chip larger and adversely affected yields

(2) They could not find any performance or IQ value to adding the circuitry which would have justified its expense in terms of yields

Although announced weeks earlier, nV40-based products have yet to ship while R420 products have been shipping for weeks. This is the same kind of elective decision ATi made when it decided to use .15 microns for R3x0 while nVidia opted for .13 microns for nV3x. The practical results of those decisions speak for themselves, don't they?

ATI also did not expect NV to move all the way up to 16 pipelines from 4 pipelines, and they did not expect NV to totally rearchitect their pipelines but rather expected NV to expand on internal processing. Finally, obviously ATI dedicated significant resources to projects like XBOX2. So logically, it appears that a combination of uncertainty about producing the SM 3.0 part, misconception about where NV was headed with their NV4x design, and possibly some issues with resource allocation are some reasons why ATI did not release an SM 3.0 part at this time. Trust me, if they could have released a quality SM 3.0 part at this time, they would have.

So far, there is zero evidence to support the notion that any IHV has "released a quality sm3.0 part" at this time (quite apart from the nV40's mass-market no-show.) There is literally no empirical evidence to support the nV PR claim that nV40 supports ps3.0, either fully or partially, and no evidence to indicate whether that support, if indeed it exists as advertised, is either well or poorly implemented. You would be wise to see the nV40's ps3.0 capabilities demonstrated before making assumptions about it. That's what I intend to do. nVidia's PR department has been notoriously wrong and wrong-headed for almost two years now on many basic issues it has represented publicly.

Remember how nVidia was hyping fp32 at nV30's paper launch in late '02? Remember how David Kirk was making public proclamations to the effect that "96-bits is not enough"? I remember that clearly as well as many other things, like the nV30 product abort, and the fact that by the end of '03 the song Kirk was singing was "64-bits is quite enough" and "96-bits is way too much"....:D You'd be wise to recall nVidia's PR pattern over the last two years concerning "new features," and try to learn something from them. They are revealing as to the disconnect between the products nV makes and the things its PR people say about those products in public. Disconnect might be too mild a word for it though--maybe it's better to think of it as a divide, or schism, gulf, etc....;)

The reality is that ATI still is studying the NV40 architecture, and trying to learn and understand more about it. With features like superscalar architecture, FP16/FP32, full support for SM 3.0, FP16 texture filtering and frame buffer blending, dedicated on-chip video processor, the NV40 has a general featureset that the entire industry is moving towards. The NV40 is clearly more of a forward-looking architecture, and only the most hardened of fanboys would argue against that notion.

To coin a phrase, it's not he who looks to the future first who wins, but he who gets there first...;) Strangely, among all your bizarre notions of "forward-looking" you aren't able to look at the present clearly enough to see that R420 has been shipping for weeks and nV40 has yet to appear in the distribution channels. The future is a phantom, in other words, and is anything but fixed. Your assumptions themselves are "forward looking" in that they assume facts not yet in evidence. My position is that we wait for the day when nV40 products are shipping into the mass-markets in quantity, and we take a street card with its current drivers and look under the hood to see which of nVidia's PR prophesies about it are correct, and which aren't.

Also, I think the nV40 architecture itself, at least theoretically, is proof of how hard nVidia's "been studying" R3x0 since 8/02...;) One benefit of that study, obviously, is that nVidia's no longer pushing ps1.x as the "future" of 3d gaming, and has stopped looking backwards to that extent--so at least R3x0 has turned them around and pointed them in the right direction, if nothing else.

I'd say it is wishful thinking to claim that the NV3x delayed DX9 uptake, considering that the majority of graphics cards sold today are integrated Intel graphics processors.

Then apparently it's clear that looking ahead and imagining fictional scenarios is your forte, since you can't see the present, much less the recent past...;) FYI, the integrated graphics market isn't the same market as the 3d-gaming market--they are quite distinct.

From quitting the FM program last year over ps2.0 and making a notorious stink about it, to strong-arming EIDOS to trash its own software's in-game benchmark publicly and remove it from the game TR: AoD simply because it revealed the poverty of nV3x's DX9 feature support far too clearly to suit nVidia's tastes, to everything else nVidia did last year, it's abundantly clear that nVidia fought DX9 API feature support--specifically ps2.0--tooth and nail. You'd have to have been living under a rock to have missed it.

There isn't too much to say, really. Developers are embracing this new technology as we speak. SM 3.0 not only adds efficiency with respect to performance but also efficiency with respect to coding. Most people would consider that to be a good thing, and a step in the right direction.

Let's see, firstly, we have no idea as to what sort or quality of ps3.0 support resides in nV40, so we don't know if the nV40 implementation of ps3.0 is worth supporting or not. One strike. Then, there are no nV40's in circulation (let alone enough of them) to make it worthwhile for developers to support ps3.0, even should the nV40 implementation of it be quite robust. Two strikes. Last, we're still waiting on M$ to figure out nV40's ps3.0 implementation well enough to develop, test, and finalize 9.0c so that developers will have something in the D3d API that they can support in their software. Three strikes, you're out...:D
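For the record, "something developers can support" boils down to what the runtime and driver actually report; here's a minimal sketch against plain D3D9 (the printf strings are mine, for illustration) of the caps check any ps3.0 path would be gated on:

```cpp
#include <d3d9.h>
#include <cstdio>

// Ask the D3D9 runtime which pixel shader version the installed driver
// reports; a game gates its ps_3_0 path on this (and still needs the
// DX9.0c-era compiler/runtime to actually feed it ps_3_0 shaders).
int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d)
        return 1;

    D3DCAPS9 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
    {
        if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
            std::printf("Driver reports ps_3_0 support.\n");
        else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
            std::printf("Driver reports ps_2_0 (or 2_x) support.\n");
        else
            std::printf("No SM2+ pixel shader support reported.\n");
    }

    d3d->Release();
    return 0;
}
```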

I want to leave you with a final thought on the present: nVidia's long been strong on releasing its own in-house demos to demonstrate the features it wishes to promote, and features not being supported in D3d has never stopped them before, as often they'd do OpenGL demos using their own custom extensions prior to that feature support making its way to D3d. So where are the nVidia demos proving to the world how much better their nV40 ps3.0 support is when contrasted to ATi's lowly ps2.0b+ support? We haven't seen them yet, have we? No OpenGL or D3d demos from nVidia demonstrating nV40's ps3.0 prowess and capability. Not a single one. Since nVidia's PR people are pushing ps3.0 like there's no tomorrow, I find that at least odd, if not telling. Could it be that nV is pushing ps3.0 for nV40 like it pushed fp32 for nV30 at the paper launch for nV30? Time alone will answer that as the future is anything but written, despite what nVidia's PR people would wish you to believe. In the present, the questions far outweigh the answers.
 
What a load of tripe!

NV40 is available as we speak. People own them & can purchase them.

As for nVidia's SM3.0 demos, they are coming, rest assured of that, but there is little point in making them available before DX9.0c is available.
 