Anti-competitive Actions (PhysX) by Nvidia - class action?

I'm sure it's against the law to do so. After all, MS had a monopoly in the OS market and used that to push out web browser competitors.
What rpg is trying to tell you is that you need to be found a monopolist in a court of law first, before certain monopoly rules apply.

As long as this is not the case, the rules are much, much looser. Ergo, it's pretty much certain that it's not against the law.

With ~30-something percent of the GPU market, you'll have a hard time indeed proving in the first place that they are a monopolist.
 
I was under the (false?) impression that TWIMTBP meant Nvidia-subsidized. Is that not correct?
I don't know. It's what Ati constantly claims and Nvidia constantly denies.
In the basic case the data is actually transferred to the host system and then back to the GPU ;) That's how PhysX works. However, Nvidia has some hacks to avoid that, but I'm not sure they used these in games.

Anyway everything has to be transferable to support PhysX on a dedicated GeForce.

The support stuff reason is bullshit.
Please explain? What's the basic case? With Ageia's original implementation, it sure was/is the way you're describing it. But why would Nvidia do physics on their GPUs, transfer the result back to the CPU and then again back to the GPU for rendering? I don't see any reason for that.

Indeed, which is why the VW analogy was a bit flawed too..
Ok, then let's drop silly car analogies. :)

Well, they do that all the time, but with their compilers. FTC and AMD don't like it very much.
Right, but Intel has about 75-80 percent market share in x86 CPUs, whereas Nvidia has well below 30 percent in graphics (at least I believe so).

Those numbers don't really mean anything. A lot of small devs use PhysX because it's free, and the number of AAA titles using it is not that impressive. Batman: AA is the poster child of PhysX.
Sure, but then: look at who does physics on the consoles - where the vast majority of sales happens. At least it's not an Nvidia GPU.

IMHO it would be foolish to swap your perfectly functioning ATI card for an NV part just to get more eye candy in a few titles.
Agreed.
If NV wants to pull a Glide let them, all this will be history when DirectCompute/OpenCL physics solvers hit the market.
I've been hearing about "other" GPU-utilizing physics solutions since when - the beginning of 2006? That was when GPU Havok was all the hype with AMD AND Nvidia. And yet, all that open stuff has yet to prove itself to be more than smoke screens, nice papers, and tech demos.

It's basically the same as with open-standards compute. Whereas vendor-specific APIs like CAL/Stream and CUDA have been building utilization of GPU resources for several years now, the open-standards stuff has yet to ship/has just begun to ship (a single) product(s). It's either wait endlessly or go your own way. The same with Glide and other vendor-specific APIs back in the day: if we had waited for DX titles, we'd have been set back a few years.
 
Please explain? What's the basic case? With Ageias original implementation, it sure was/is they you're describing it. But why would Nvidia do physics on their GPUs, transfer the result back to the CPU and then again back to the GPU for rendering? I don't see any reason for that.

Based on cuda docs, nv can't do gpu<->gpu dma transfers. It has to be rerouted via cpu.
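To make the "rerouted via CPU" path concrete, here's a minimal sketch (not from the thread; the function name `copyViaHost` is illustrative) of how a buffer would move between two GPUs with the CUDA runtime of that era, which had no peer-to-peer copy API - everything is staged through a host buffer:

```cuda
// Sketch: moving data from one GPU to another without a peer-to-peer
// path. The transfer is staged through host memory, i.e. "rerouted via CPU".
#include <cuda_runtime.h>
#include <stdlib.h>

void copyViaHost(void *dst, int dstDev, const void *src, int srcDev, size_t bytes)
{
    void *staging = malloc(bytes);           // host-side bounce buffer

    cudaSetDevice(srcDev);
    cudaMemcpy(staging, src, bytes, cudaMemcpyDeviceToHost);  // GPU -> CPU

    cudaSetDevice(dstDev);
    cudaMemcpy(dst, staging, bytes, cudaMemcpyHostToDevice);  // CPU -> GPU

    free(staging);
}
```

So even a logically "GPU-to-GPU" copy crosses the PCIe bus twice and touches system memory in between, which is the overhead being discussed here.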

Right, but Intel has about 75-80 percent market share in x86 CPUs, whereas Nvidia has well below 30 percent in graphics (at least i blelieve that).
Which is why FTC takes a dim view of Intel's actions, but not nv's.

I've been hearing about "other" GPU-utilizing physics solutions since when - the beginning of 2006? That was when GPU Havok was all the hype with AMD AND Nvidia. And yet, all that open stuff has yet to prove itself to be more than smoke screens, nice papers, and tech demos.
Give it as much time as PhysX has had so far, dear. PhysX will ultimately be an even smaller footnote in the history of computing than CUDA if nv doesn't mend its ways.

It's basically the same as with open-standards compute. Whereas vendor-specific APIs like CAL/Stream and CUDA have been building utilization of GPU resources for several years now, the open-standards stuff has yet to ship/has just begun to ship (a single) product(s). It's either wait endlessly or go your own way. The same with Glide and other vendor-specific APIs back in the day: if we had waited for DX titles, we'd have been set back a few years.
And yet, proprietary standards come crashing down all the time. IMHO, 2012 is when the game changes in favor of open compute.
 
Based on cuda docs, nv can't do gpu<->gpu dma transfers..
How do they do SLI then?
Give it as much time as PhysX has had so far, dear. PhysX will ultimately be an even smaller footnote in the history of computing than CUDA if nv doesn't mend its ways.
How much older than mid-2006 is Physx? I honestly don't know.

But you're making exactly the point I will refer to further down:
And yet, proprietary standards come crashing down all the time. IMHO, 2012 is when the game changes in favor of open compute.
Of course they are. But they tend to pave the way and bridge the time until an open standard is finally ready.
Without the proprietary stuff - and that's my personal take on this - we'd be waiting for open standards even longer, not to mention for products based on them.
 
How do they do SLI then?
They composite the frames over the SLI bridge. AFAIK, they can't do DMAs over PCIe.

How much older than mid-2006 is Physx? I honestly don't know.
I meant physx on gpu's.
Of course they are. But they tend to pave the way and bridge the time until an open standard is finally ready.
Without the proprietary stuff - and that's my personal take on this - we'd be waiting for open standards even longer, not to mention for products based on them.
Yup.
 
They composite the frames over the SLI bridge. AFAIK, they can't do DMAs over PCIe.
AFAIK, they cannot do PCIe DMAs at all without stalling the chip (one of the great new things in Fermi, but also one AMD has been capable of for a long, long time). Whether that inability holds true for SLI DMAs I do not know. But anyhow, I think this is not necessarily the same as sending all the stuff via chipset/CPU to the other card (be it GF or Radeon).

I meant physx on gpu's.
Ok - fair enough. But then the other argument takes over, that you'll have to wait longer without the proprietary trailblazers.
 
AFAIK, they cannot do PCIe DMAs at all without stalling the chip (one of the great new things in Fermi, but also one AMD has been capable of for a long, long time). Whether that inability holds true for SLI DMAs I do not know. But anyhow, I think this is not necessarily the same as sending all the stuff via chipset/CPU to the other card (be it GF or Radeon).
IIRC, they can do cpu->gpu asynchronously with compute since g92. gt200 added gpu->cpu with compute, but was still half-duplex. Fermi adds full duplex dma async with compute capability.
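The overlap being described is driven through CUDA streams. A minimal sketch (names illustrative, not from the thread) of issuing an upload and a download on separate streams - on parts with two DMA engines (Fermi) the two directions can run simultaneously, while older half-duplex parts serialize them:

```cuda
// Sketch: asynchronous transfers on separate streams. For the copies to
// actually overlap, the host buffers must be page-locked (cudaMallocHost).
#include <cuda_runtime.h>

void pipelined(float *dIn, float *dOut, float *hIn, float *hOut, size_t n)
{
    cudaStream_t up, down;
    cudaStreamCreate(&up);
    cudaStreamCreate(&down);

    // Upload on one stream, download on the other. Whether these two
    // transfers truly run in parallel is a hardware property (DMA engine
    // count), not visible in the API call pattern.
    cudaMemcpyAsync(dIn,  hIn,  n * sizeof(float), cudaMemcpyHostToDevice, up);
    cudaMemcpyAsync(hOut, dOut, n * sizeof(float), cudaMemcpyDeviceToHost, down);

    cudaStreamSynchronize(up);
    cudaStreamSynchronize(down);
    cudaStreamDestroy(up);
    cudaStreamDestroy(down);
}
```

Kernels launched on yet another stream can then execute concurrently with both copies, which is the "async with compute" capability mentioned above.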

OT: At this rate, I wouldn't be surprised to see a few arm cores and 10 GbE on Fermi 3 :LOL: NV is literally growing an entire ecosystem on the other side of PCIe bus.
 
Ok - fair enough. But then the other argument takes over, that you'll have to wait longer without the proprietary trailblazers.

Not necessarily. Many of the things pushed in DX updates aren't always first done in GPU hardware.

GPUs certainly didn't push accelerated shader-based AA resolve. MS pushed the standard to open things up more for devs. ATI followed while Nvidia resisted (R600 vs. G80). Some things are co-developed. It's hard to say whether programmable shaders were pushed by GPU IHVs or by MS/software devs.

Had MS pushed GPU physics through DX, adoption would have been universal and implementation would have followed shortly after. As such, while Ageia might be credited with starting the ball rolling back in 2002, MS (DirectCompute) and the Khronos Group (OpenCL) will have a far larger hand in driving mass adoption of physics acceleration.

Without some sort of standardized interface/API for these sorts of things, it was never going to become the next "big" thing. I.e., no repeat of the 3dfx + Glide phenomenon: hardware-accelerated PhysX just hasn't had as noticeable an impact on the user experience as 3dfx + Glide did in completely revolutionizing 3D acceleration on the PC, which not only hugely increased IQ but also hugely increased rendering speed.

If Nvidia doesn't work to make PhysX vendor-agnostic, then they'll be lucky if in 3 years' time it's as relevant to PC gaming as Creative's EAX is currently. Interesting parallel: EAX quickly gained dominance in PC games due to being an open standard when it was first introduced (versions 1 and 2). And its downfall quickly started after it was closed off and made proprietary (versions 3 and 4), which coincided with DX offering better 3D audio positioning.

Regards,
SB
 
Not necessarily. Many of the things pushed in DX updates aren't always first done in GPU hardware.

GPUs certainly didn't push accelerated shader-based AA resolve. MS pushed the standard to open things up more for devs. ATI followed while Nvidia resisted (R600 vs. G80). Some things are co-developed. It's hard to say whether programmable shaders were pushed by GPU IHVs or by MS/software devs.

Had MS pushed GPU physics through DX, adoption would have been universal and implementation would have followed shortly after. As such, while Ageia might be credited with starting the ball rolling back in 2002, MS (DirectCompute) and the Khronos Group (OpenCL) will have a far larger hand in driving mass adoption of physics acceleration.

Without some sort of standardized interface/API for these sorts of things, it was never going to become the next "big" thing. I.e., no repeat of the 3dfx + Glide phenomenon: hardware-accelerated PhysX just hasn't had as noticeable an impact on the user experience as 3dfx + Glide did in completely revolutionizing 3D acceleration on the PC, which not only hugely increased IQ but also hugely increased rendering speed.

If Nvidia doesn't work to make PhysX vendor-agnostic, then they'll be lucky if in 3 years' time it's as relevant to PC gaming as Creative's EAX is currently. Interesting parallel: EAX quickly gained dominance in PC games due to being an open standard when it was first introduced (versions 1 and 2). And its downfall quickly started after it was closed off and made proprietary (versions 3 and 4), which coincided with DX offering better 3D audio positioning.

Regards,
SB

Sorry, but I don't follow your logic. Shader-based AA resolve was a result of free programmability on GPUs and can be done by R600 and G80 alike - as the Call of Juarez benchmark demo showed.

EAX was, AFAIR, never an open standard. Creative Labs only gave away free licenses for older versions after they had moved on.

And please note: I've never asserted that proprietary stuff is here to stay, but rather that they pave the way until open standards take over - and that is accelerated by the existence of proprietary stuff.
 
Sorry, but I don't follow your logic. Shader-based AA resolve was a result of free programmability on GPUs and can be done by R600 and G80 alike - as the Call of Juarez benchmark demo showed.

With much publicized complaining by Nvidia that hardware resolve wasn't used, even going so far as to release a public PR statement that there was no reason to do shader based resolve when G80 supported hardware resolve. Conveniently glossing over the fact that shader based resolve in that demo provided more accurate AA.

EAX was, AFAIR, never an open standard. Creative Labs only gave away free licenses for older versions after they had moved on.

Here's one of the first results I got on Google (http://www.dansdata.com/MX300.htm ). This article was written while EAX 2.0 was still in development.

The SBLive supports DirectSound 3D and EAX, but not A3D 1 or 2. The MX300 doesn't yet support EAX, but we're promised it will with an upcoming driver release; the SBLive will never support A3D. A3D is Aureal's proprietary standard, which only cards based on their designs can use; EAX was created by Creative Labs, makers of the Sound Blaster line, but it's an open standard and anyone can make gear that uses it.

Yes, it was an open standard up until 3.0. No licensing was required prior to 3.0, free or otherwise. And continues to be an open standard free of licensing. 3.0+ however remains proprietary as Creative decided at that point to leverage EAX more heavily to promote its own hardware. The net effect being that it has basically killed EAX in the process.

And please note: I've never asserted that proprietary stuff is here to stay, but rather that they pave the way until open standards take over - and that is accelerated by the existence of proprietary stuff.

I guess we'll have to agree to disagree. It sometimes follows that model, and sometimes it doesn't. Considering it's taken 8 years for vendor agnostic hardware physics acceleration to appear (still in infancy), it's had plenty of time to establish itself and create a large and thriving ecosystem. Since that hasn't happened in any appreciable way, I wouldn't exactly say that proprietary systems have done much to significantly accelerate the process.

I do somewhat agree on seeding the idea however. I'd counter it was inevitable that GPU compute would lead to hardware physics acceleration, but there's no way to prove that, so we'll just have to disagree on that subject.

But we do agree that vendor agnostic systems (Open is a bit misleading as OpenGL (open) has for the most part failed in the face of DirectX (closed) but both are vendor agnostic) will in general replace proprietary systems as long as they don't stagnate.

Regards,
SB
 
With much publicized complaining by Nvidia that hardware resolve wasn't used, even going so far as to release a public PR statement that there was no reason to do shader based resolve when G80 supported hardware resolve. Conveniently glossing over the fact that shader based resolve in that demo provided more accurate AA.
Right you are!

Here's one of the first results I got on Google (http://www.dansdata.com/MX300.htm ). This article was written while EAX 2.0 was still in development.
-> "with a future driver version will also support DirectSound EAX (Environmental Audio eXtensions)." [my bold]
I'm still not convinced.

Yes, it was an open standard up until 3.0. No licensing was required prior to 3.0, free or otherwise. And continues to be an open standard free of licensing. 3.0+ however remains proprietary as Creative decided at that point to leverage EAX more heavily to promote its own hardware. The net effect being that it has basically killed EAX in the process.
If you say so… but I rather believe Vista with its lack of support for EAX (or Creative with its failure to provide a satisfactory solution to the problem) did more than its fair share to kill EAX.


Considering it's taken 8 years for vendor agnostic hardware physics acceleration to appear (still in infancy), it's had plenty of time to establish itself and create a large and thriving ecosystem. Since that hasn't happened in any appreciable way, I wouldn't exactly say that proprietary systems have done much to significantly accelerate the process.
Well, the physx ecosystem is definitely more pronounced than any other system utilizing GPUs for physics calculations. I think that rather proves my point.

Granted, it's far from being pervasive, but - as a german proverb goes: The one-eyed is king among the blind [literally translated].
 
But we do agree that vendor agnostic systems (Open is a bit misleading as OpenGL (open) has for the most part failed in the face of DirectX (closed) but both are vendor agnostic) will in general replace proprietary systems as long as they don't stagnate.
No. I consider both OGL and DX to be non-proprietary standards.

Due to the nature of sw patents, I am hesitant to call either an open standard. Undisclosed business practices (they may be reasonable or not) behind DX's evolution muddy those waters even more.
 
-> "with a future driver version will also support DirectSound EAX (Environmental Audio eXtensions)." [my bold]
I'm still not convinced.

Bah, stop making me look this stuff up. That was due to the time required to implement EAX in their drivers. :)

Here's a PDF that was released with the EAX 2.0 introduction by Creative (http://www.atc.creative.com/algorithms/eax20.pdf ). It states explicitly that it's an open standard. :)

As an open standard, EAX works not only with Creative’s cards, but with any
manufacturer’s cards that care to take advantage of the EAX property sets.

If you say so… but I rather believe Vista with its lack of support for EAX (or Creative with its failure to provide a satisfactory solution to the problem) did more than its fair share to kill EAX.

It was already gasping for breath and desperate to garner developer support during EAX 4.0's lifetime, and virtually dead by the time EAX 5.0 was launched along with the X-Fi series of soundcards. All this predates Vista by quite a bit.

During the EAX 4.0 timeframe they were already desperately trying to get developers to continue implementing EAX (much less EAX 4.0) rather than just using standard multipositional DirectSound. And those still bothering to use EAX mostly stuck with 1.0 or 2.0.

I know, as I was a pretty big fan of EAX - not only 3D positional audio but environmental audio effects and material effects on sound. I'm still pretty sad that this aspect of computer gaming has died. Games are much weaker, IMO, without audio advancing in step with graphics.

Granted, it's far from being pervasive, but - as a german proverb goes: The one-eyed is king among the blind [literally translated].

Hehe, I like that quote. :)

Regards,
SB
 
Bah stop making me look this stuff up. That was due to time required to implement EAX in their drivers. :)

Here's a PDF that was released with the EAX 2.0 introduction by Creative (http://www.atc.creative.com/algorithms/eax20.pdf ). It states explicitly that it's an open standard. :)
Sounds a lot like what Nvidia's saying about physics, though. ;)


Granted, it's far from being pervasive, but - as a german proverb goes: The one-eyed is king among the blind [literally translated].

Hehe, I like that quote. :)

Regards,
SB
Always happy to shed some joy into this world. :)


Cheers,
Carsten
 