AMD: "[Developers use PhysX only] because they’re paid to do it"

It is totally useless to have discussions with some people because they truly want to believe that Nvidia is the devil of GPUs and Intel the devil of CPUs, and AMD/ATI the angels above them all.
Ironic post, don't you think, XMAN?

-FUDie
 
No, there were rendering problems with the path.
So instead of fixing the alleged problems, they didn't release the patch. Yeah, that's a good solution.

In any event, you still haven't responded to the TRAOD patch being retracted and the benchmark being removed.

-FUDie
 
Ironic post, don't you think, XMAN?

-FUDie

No, because unlike you and others, I'm not making AMD/ATI out to be devils, whereas to you, anything Nvidia does = evil, even when it's proven NOT to be of their doing.

So instead of fixing the alleged problems, they didn't release the patch. Yeah, that's a good solution.

In any event, you still haven't responded to the TRAOD patch being retracted and the benchmark being removed.

-FUDie

Apparently the only people concerned about the removal of a benchmark are ATI fans. Gotta cling to every last shred of anything that shows Nvidia in a negative light in gaming results.
 
So instead of fixing the alleged problems, they didn't release the patch. Yeah, that's a good solution.

Ask AMD. They worked together with Ubisoft.

In any event, you still haven't responded to the TRAOD patch being retracted and the benchmark being removed.

-FUDie
I responded to the AC claim. But we can speak about Half-Life 2 and the non-existence of the "NV3x path" in the retail version. Where was it?
 
No, because unlike you and others, I'm not making AMD/ATI out to be devils, whereas to you, anything Nvidia does = evil, even when it's proven NOT to be of their doing.
Actually, you do put down AMD/ATI with every post and put nvidia on a pedestal, whereas I don't make out nvidia to be any more evil than they actually are. :LOL: But please, feel free to show where I've accused nvidia of anything that wasn't publicly verified.

Here, I'll help:
http://forum.beyond3d.com/search.php?searchid=1310667&pp=25

Oh, and here's a classic too: http://forum.beyond3d.com/showthread.php?p=1360269#post1360269
XMAN26 said:
WOW! Going all the way back to the FX 5800 days on that one with 3DMark03, huh, and when was that a game? One where the pic used in the "shader scape" was shown to be rendered more true on the Nvidia card than ATI's. And yes, real world, ATI has long since beaten up Nvidia in the 3D pro synthetic apps, but in the ones used by pros, it was just the opposite. The 4870 and GTX 285 were no different: on paper, ATI had the better specs for most things, but yet got beat in actual apps. I doubt that is going to change this time around either.
One more: http://forum.beyond3d.com/showthread.php?p=1339955#post1339955
XMAN26 said:
I still think this launch was a fail on AMD's part.

Guess what? AMD has released 4 chips from the 5xxx series, where's the fail?

You aren't the objective observer you think you are.

-FUDie
 
I responded to the AC claim. But we can speak about Half-Life 2 and the non-existence of the "NV3x path" in the retail version. Where was it?
Half-Life 2 did have an NV3x path. Don't you recall all the PS 1.x shaders and the liberal use of _pp for the few 2.0 shaders?

Maybe you should provide some source to back up your comment. Here, I'll help: http://forums.hexus.net/graphics-cards-monitors/3367-halflife-2-ati-only-route.html That post links Anandtech: http://www.anandtech.com/showdoc.aspx?i=1862.
Anandtech said:
even with the special NV3x codepath, ATI is the clear performance leader under Half-Life 2 with the Radeon 9800 Pro hitting around 60 fps at 10x7. The 5900 ultra is noticeably slower with the special codepath and is horrendously slower under the default dx9 codepath
So there is a special NV3x path, it's Valve's fault for not making the 5900 faster than it is? Maybe Valve should have tried flat-shaded polygons?

-FUDie
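(For anyone who doesn't remember what "_pp" refers to: it is the partial-precision modifier in ps_2_0 shader assembly, which the HLSL compiler emits when a shader uses the `half` type, and NV3x hardware only ran shaders at full speed when they stayed at reduced precision. The snippet below is a purely illustrative sketch in C++ embedding an HLSL string of that era; it is not taken from Valve's shaders, and the names are made up.)

```cpp
// Illustrative only: an HLSL pixel shader of the kind discussed above, stored
// as a C++ string constant. The 'half' type maps to the _pp (partial precision)
// modifier in the compiled ps_2_0 assembly, trading precision for speed on NV3x.
const char* kExamplePixelShader = R"hlsl(
    sampler baseMap : register(s0);

    half4 main(float2 uv : TEXCOORD0) : COLOR
    {
        // Fetch and modulate at 16-bit (partial) precision.
        half4 c = tex2D(baseMap, uv);
        return c * half(0.5);
    }
)hlsl";
```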
 
Apparently the only people concerned about the removal of a benchmark are ATI fans. Gotta cling to every last shred of anything that shows Nvidia in a negative light in gaming results.
There was plenty to show NV30 in a negative light, but TRAOD was the first game to make use of lots of new DX9 features: SM2.0, 10_10_10_2 render targets, etc. Why wouldn't a gaming enthusiast be interested in benchmarking that? :LOL:

Do you work hard to post such ridiculous statements or does it come naturally?

-FUDie
 
Half-Life 2 did have an NV3x path. Don't you recall all the PS 1.x shaders and the liberal use of _pp for the few 2.0 shaders?

Maybe you should provide some source to back up your comment. Here, I'll help: http://forums.hexus.net/graphics-cards-monitors/3367-halflife-2-ati-only-route.html That post links Anandtech: http://www.anandtech.com/showdoc.aspx?i=1862.

So there is a special NV3x path, it's Valve's fault for not making the 5900 faster than it is? Maybe Valve should have tried flat-shaded polygons?

-FUDie

You should check the retail version. There is no NV3x path - only DX7, DX8 and DX9.
 
Actually, you do put down AMD/ATI with every post and put nvidia on a pedestal, whereas I don't make out nvidia to be any more evil than they actually are. :LOL: But please, feel free to show where I've accused nvidia of anything that wasn't publicly verified.

Here, I'll help:
http://forum.beyond3d.com/search.php?searchid=1310667&pp=25

Oh, and here's a classic too: http://forum.beyond3d.com/showthread.php?p=1360269#post1360269

One more: http://forum.beyond3d.com/showthread.php?p=1339955#post1339955


Guess what? AMD has released 4 chips from the 5xxx series, where's the fail?

You aren't the objective observer you think you are.

-FUDie

1. Where has anything been proven other than pure conjecture from Huddy? But then again, in your eyes, if Huddy says it's so, it must be all the proof you need.
2. And where in that statement was I wrong?
3. Again, where am I wrong? For years ATI has ruled synthetic benches, and the performance results there have not translated to gaming performance wins gen vs gen: R600 vs G80, R670 vs G92, R770 vs G200. R580 being the lone exception since '04. Before that, R300/350.
4. I still find it somewhat of a fail. The 5770 being slower than the previous high end is nothing to be happy about. Hell, the 8600 GTS was an even worse launch: its performance fell back two gens, and not even to the high-end part; 7600 GTs were faster. If the midrange for Nvidia doesn't at least equal the GTX 285, it too is a fail. As to why I consider R870 a fail: you double nearly everything from R770 and at best it's 30% faster than your competitor's DX10 hardware overall.
 
Retail version with patches or without? In any event, what's the point of an NV3x path when DX8 works just as well for it? http://www.anandtech.com/showdoc.aspx?i=1863&p=8 Not exactly stellar performance for the 5900 in the mixed-mode.

I know the 5900 is faster than the 4600, so use the DX8 path, get faster performance and be happy.

-FUDie

Now this is cute. It's shown that Valve, in conjunction with ATI, released a version that hurt a competitor, and you are TOTALLY okay with it. But yet conjecture and speculation about Nvidia, even when proven false, gets an "OMG, look what they are doing," whether or not they are doing anything.

And FYI, the NV3x code path wasn't released as a patch until after NV40 launched. What's the point of releasing it then?
 
1. Where has anything been proven other than pure conjecture from Huddy? But then again, in your eyes, if Huddy says it's so, it must be all the proof you need.
Where did I say that? Your statement that "You won't believe AMD until Ubisoft admits to being wrong" is ludicrous because there's no way Ubisoft will do that.
2. And where in that statement was I wrong?
Because you are probably the only person to think this way?
3. Again, where am I wrong? For years ATI has ruled synthetic benches, and the performance results there have not translated to gaming performance wins gen vs gen: R600 vs G80, R670 vs G92, R770 vs G200. R580 being the lone exception since '04. Before that, R300/350.
R300/350 were far faster than anything around. Did you not even look at the Half-life 2 numbers posted a couple posts back? And that's just one game. The only place NV3x did well was Doom3, but I don't think that holds anymore.
4. I still find it somewhat of a fail. The 5770 being slower than the previous high end is nothing to be happy about. Hell, the 8600 GTS was an even worse launch: its performance fell back two gens, and not even to the high-end part; 7600 GTs were faster. If the midrange for Nvidia doesn't at least equal the GTX 285, it too is a fail. As to why I consider R870 a fail: you double nearly everything from R770 and at best it's 30% faster than your competitor's DX10 hardware overall.
Your lack of logic is astounding. RV770 was slower than GTX 285, right? Where did you think doubling RV770 would put performance? I'll give you a big hint: 5870 is not double 4870 in one big area. But, 5870 is easily double 4870 in many ways, such as shader/compute performance, as long as you avoid the one big thing that's not doubled.

Since when has the next gen's midrange been faster than the previous gen's high end? Never? I'll draw up a list of ATi's products, you can do the same with nvidia:
High end: R300->350->360->420->480->520->580->600->RV670->770->Cypress
Midrange: RV350->370->410->530->635->730->740->Juniper

RV740 was certainly faster than RV670, but I wouldn't call RV740 a midrange chip. Plus, RV670 was sort of broken.

-FUDie
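For reference, the widely published specs make the point above concrete: the shader count doubled, while memory bandwidth (presumably the "one big thing" being alluded to) grew by only about a third. The figures below assume the standard reference clocks for both cards.

```latex
\[
\begin{aligned}
\text{HD 4870:}\quad & 256\,\text{bit} \times 3.6\ \text{Gbps} / 8 = 115.2\ \text{GB/s}, \qquad 800\ \text{ALUs} \\
\text{HD 5870:}\quad & 256\,\text{bit} \times 4.8\ \text{Gbps} / 8 = 153.6\ \text{GB/s}\ (\approx 1.33\times), \qquad 1600\ \text{ALUs}\ (2\times)
\end{aligned}
\]
```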
 
Now this is cute. It's shown that Valve, in conjunction with ATI, released a version that hurt a competitor, and you are TOTALLY okay with it. But yet conjecture and speculation about Nvidia, even when proven false, gets an "OMG, look what they are doing," whether or not they are doing anything.
The difference here is that the NV3x path didn't make 5900 performance competitive. It also didn't add new features (DX9 path was already there). So what is the real point? How does this compare to, say, the AC patch which added DX10.1 features?
XMAN26 said:
And FYI, the NV3x code path wasn't released as a patch until after NV40 launched. What's the point of releasing it then?
Maybe it was buggy, like the alleged bugs in the AC patch? I don't know, why don't you ask nvidia/Valve.

Last I checked, people were still using NV3x video cards when NV40 was released.

-FUDie
 
Where did I say that? Your statement that "You won't believe AMD until Ubisoft admits to being wrong" is ludicrous because there's no way Ubisoft will do that.

Because you are probably the only person to think this way?

R300/350 were far faster than anything around. Did you not even look at the Half-life 2 numbers posted a couple posts back? And that's just one game. The only place NV3x did well was Doom3, but I don't think that holds anymore.

Your lack of logic is astounding. RV770 was slower than GTX 285, right? Where did you think doubling RV770 would put performance? I'll give you a big hint: 5870 is not double 4870 in one big area. But, 5870 is easily double 4870 in many ways, such as shader/compute performance, as long as you avoid the one big thing that's not doubled.

Since when has the next gen's midrange been faster than the previous gen's high end? Never? I'll draw up a list of ATi's products, you can do the same with nvidia:
High end: R300->350->360->420->480->520->580->600->RV670->770->Cypress
Midrange: RV350->370->410->530->635->730->740->Juniper

RV740 was certainly faster than RV670, but I wouldn't call RV740 a midrange chip. Plus, RV670 was sort of broken.

-FUDie

And there is no way Huddy could possibly be covering ATI's ass either, give me a break.

The 6600 GT was faster than the 5950. The 7600 GT was equal to the 6800 GT. The 8600 GTS (failure) was slower than the 6800 GT. The 9600 GT was equal to the 8800 GTS (G80). There were no real G200 midrange parts, and the ones that finally did show up are slower again; they also are FAIL.

PSSSST, this here: "R300/350 were far faster than anything around. Did you not even look at the Half-life 2 numbers posted a couple posts back? And that's just one game. The only place NV3x did well was Doom3, but I don't think that holds anymore"

is pretty much what I said here: "Gen vs Gen. R600 vs G80, R670 vs G92, R770 vs G200. R580 being the lone exception since '04. Before that, R300/350", but I guess you didn't really read what I wrote, though.
 

The quotes are pretty clearly about single-vendor GPU PhysX (where it seems pretty spot on), NOT the much more popular console/CPU PhysX, but the NV Q&A is conveniently talking about PhysX in general, so they can leverage the popularity of CPU PhysX instead of answering the actual claim: that support for GPU PhysX is almost non-existent unless Nvidia pays for it or adds it themselves.
 
Did you think that through? So AMD is going to make chipsets that only work with its graphics cards, and graphics cards that only work with its chipsets, and in doing so hand the entire desktop PC market over to Nvidia? Nice plan.

What's to stop AMD from making PCIe cards? They made both AGP and PCIe cards for years. Why can't they make two again?

PhysX adds value for Nvidia's customers. What value does a proprietary peripheral interface add when it does exactly the same thing that an existing widely adopted standard does?

If the slot has more bandwidth, it would add value there. It can provide more power through the slot so you don't need as many connectors on the card, and so on.


 
If the slot has more bandwidth, it would add value there. It can provide more power through the slot so you don't need as many connectors on the card, and so on.

If I recall correctly, the enthusiast market is not the main point of sale for either NVidia or ATI cards; instead, it is OEM vendors. As I am sure is clear from Dell's history, OEMs cannot survive on proprietary standards. If ATI were to try to switch to a proprietary PCIe standard, they would lose the OEM market to Intel and NVidia. Because OEM manufacturers buy in bulk, they tend to stay away from proprietary formats. I believe it would be near suicide for ATI to try what you are suggesting.

If both Intel and AMD tried to collude to switch in order to keep NVidia out of the game, we really would see antitrust lawsuits, as such behavior is clearly illegal. While I know you are just trying to make a point, there is no reasonable situation where AMD and/or Intel could do what you are suggesting.

I really don't understand the point of this argument, though. Many computer companies have done this sort of thing in the past. Apple is a huge example: their hardware was so restricted and proprietary for a time that upgrading was unheard of. Dell only recently dropped its proprietary motherboards and power supplies in its gaming machines. The history of computing is littered with examples. What you will find is that generally one of two things happens: either A) the technology is so compelling that competing companies come up with their own solutions and an industry standard is eventually forced by larger OEMs/purchasers who don't want to deal with 10 different standards for every part, or B) the technology never expands past "niche" status and eventually fades away.

So far, neither Intel nor AMD has been willing to step up to make A happen. Then again, you can make a fairly decent argument that B is happening. I would just let time take its course.
 
"Eidos’ legal department" does not equal Nvidia.

Legal departments don't get involved unless there is possible legal action by another party. In this case, what legal pressure would Eidos' legal department be afraid of, enough to actively prevent the removal of a vendorID lock that obviously favors one and only one IHV? And to prevent the head of your company, as well as the developers themselves, from doing what they want to do, which again was to remove the vendorID lock?

You are being deliberately obtuse.

And as was pointed out, the AC DX10.1 patch was work done by AMD devrel. Notice that AMD devrel actually does do significant work (so why wouldn't they have offered help and code for Batman: AA, another big title?). But it was removed at the behest of Nvidia because it provided a noticeable 20% improvement in framerates, an improvement that also happens to be backed up by other games using a DX10.1 path. In other words, the performance increase wasn't due to the bug, but to the increased rendering efficiency of 10.1 with regard to AA.

Yet despite the AC DX10.1 path having no visual anomalies, it was removed and nothing was put back to replace it, despite AMD being more than willing to help correct anything that might be wrong.

So in the end a company arbitrarily decides to screw over its customers, and you actually think there wasn't external pressure from your main co-marketing partner, which happens to be negatively affected by performance improvements from a competitor with a more technologically advanced (DX10.1 versus DX10.0) feature set?

Regards,
SB
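To make concrete what a "vendorID lock" means in this context, here is a minimal, hypothetical C++/DXGI sketch. It is not taken from any shipped title; the function name and the gating policy are assumptions, and only the PCI vendor IDs (0x10DE for NVIDIA, 0x1002 for ATI/AMD) are real, well-known values.

```cpp
// Hypothetical sketch of a vendor-ID gate of the kind described above.
#include <windows.h>
#include <dxgi.h>
#pragma comment(lib, "dxgi.lib")

bool AllowVendorLockedFeature(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    if (FAILED(adapter->GetDesc(&desc)))
        return false;

    // A vendor-ID lock ignores what the hardware can actually do and simply
    // refuses the feature unless the adapter reports the "approved" vendor.
    return desc.VendorId == 0x10DE;   // 0x10DE = NVIDIA, 0x1002 = ATI/AMD
}
```

A vendor-neutral alternative would be to gate the feature on reported driver or feature-level capabilities rather than on who made the chip.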
 
You should check the retail version. There is no NV3x path - only DX7, DX8 and DX9.

And doesn't the NV3x support all of those? Why would it need a special path?

Why should a dev spend extra time and money on special Nvidia paths? Shouldn't Nvidia have made a properly performing DX9 card instead of having to get developers to program special paths?
 
To be fair, didn't someone hack a HL2 path to use only FP16 or INT16 calcs, and it ended up being faster and looking basically the same as the included HL2 path on NV3x hardware? I suppose this falls under the same category as Humus' Doom 3 hack (texture look-ups -> math) as a case of the dev not wanting to add another path for the sake of ease of maintenance.
 