AMD: "[Developers use PhysX only] because they’re paid to do it"

Considering the Metro 2033 devs are using the PhysX libraries and have stated THEY DO and WILL use more than one core, that really flies in the face of your stance that Nvidia is crippling the PhysX libraries on multi-core, multithread-capable systems. If they can take the time to make sure it works and implement it, then so can every other dev out there. The fact that PhysX in other games isn't using more than one core points to those devs not taking the time to implement the multithreaded PhysX libraries, not to Nvidia preventing it from happening.

Perhaps because, unlike past PhysX efforts in games, this dev might actually be interested in providing a compelling experience for the consumer rather than just more GPU advertising for Nvidia?

From what they are saying so far, it appears they aren't going to do the "LOOK AT ME I'M SPECIAL" type of GPU PhysX; rather, they are using it to speed up the game and perhaps add a little on top, instead of doing what almost all past games did: pile on an excessive number of LOOK AT ME effects with an associated huge hit to performance.

It'll certainly be interesting to see how it turns out, as well as whether they use PhysX only for GPU acceleration and go with an in-house solution for the CPU side.

Regards,
SB
 
According to the Metro 2033 guys, the GPU will only do double what the CPU can do. Take that however you want. I'm assuming they are talking about quad-core CPUs when they say that.

Regards,
SB

What I read, and what Trinibwoy quoted earlier, is that the CPU is going to be used for the simpler physics calculations, while a GPU that can do physics will handle the heavy physics-based stuff. Otherwise it will switch to a load-balanced code path so as not to affect gameplay for the end user while still giving them nearly the same game experience. In other words, they took the time and effort to do things right and not half-assed.
 
I think this is where it comes from:

PCGH: What are the visual differences between physics calculated by CPU and GPU (via PhysX, OpenCL or even DX Compute)? Are there any features that players without an Nvidia card will miss? What technical features cannot be realized with the CPU as "physics calculator"?

Oles Shishkovstov: There are no visible differences as they both operate on ordinary IEEE floating point. The GPU only allows more compute heavy stuff to be simulated because they are an order of magnitude faster in data-parallel algorithms. As for Metro2033 - the game always calculates rigid-body physics on CPU, but cloth physics, soft-body physics, fluid physics and particle physics on whatever the users have (multiple CPU cores or GPU). Users will be able to enable more compute-intensive stuff via in-game option regardless of what hardware they have.

Lots of interesting goodies in there, no less:
http://www.pcgameshardware.com/aid,706182/Exclusive-tech-interview-on-Metro-2033/News/
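
For what it's worth, the split he describes (rigid bodies always on the CPU, the cloth/fluid/particle eyecandy on whatever hardware is available, plus an in-game toggle for the heavier effects) boils down to a fairly simple dispatch decision. Here's a minimal C++ sketch of that idea, purely for illustration; every name below is made up for this post and has nothing to do with the actual PhysX SDK API.

```cpp
// Hypothetical sketch of the CPU/GPU split described in the interview.
// None of these identifiers come from the real PhysX SDK; they only
// illustrate the "rigid bodies on CPU, effects wherever" idea.
#include <iostream>
#include <thread>

enum class Backend { CpuMultiCore, Gpu };

// Assumption: some way of detecting a CUDA-capable GPU exists.
bool has_cuda_gpu() { return false; /* placeholder */ }

Backend pick_effects_backend() {
    // Cloth, soft bodies, fluids, particles: run on the GPU if one is
    // available, otherwise spread the work across the CPU cores.
    return has_cuda_gpu() ? Backend::Gpu : Backend::CpuMultiCore;
}

void step_physics(bool extra_effects_enabled) {
    // Gameplay-affecting rigid-body simulation always stays on the CPU,
    // so results are identical for every player.
    // simulate_rigid_bodies_on_cpu();   (placeholder)

    if (pick_effects_backend() == Backend::Gpu) {
        // simulate_cloth_fluid_particles_on_gpu();   (placeholder)
        std::cout << "effects on GPU\n";
    } else {
        unsigned cores = std::thread::hardware_concurrency();
        // simulate_cloth_fluid_particles_on_cpu(cores);   (placeholder)
        std::cout << "effects on " << cores << " CPU cores\n";
    }

    // The interview also mentions an in-game option that enables the
    // heavier effects regardless of hardware.
    (void)extra_effects_enabled;
}

int main() { step_physics(true); }
```

Nothing in that layout forces the effects path to be GPU-only; on a quad core the same systems would presumably just run with smaller particle counts or iteration budgets.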
 
After further reading, none of the current GPU implementations can do rigid body physics to begin with. This is very likely why there are applications 'in the wild' with the PhysX API that are not using GPU acceleration -- because they're doing rigid body simulations and nothing else.

Rigid body simulations also seem to be the types of simulations that are 'gameplay-affecting' rather than 'eyecandy-only'. So really, we've relegated GPU physics to eyecandy only, which goes to Trinibwoy's point that he's been driving home (and validly so) -- it's just another eyecandy thing, and should be considered personal taste...

And while I agree with that, I still think (as Metro 2033 has demonstrated with their own hard work and time) that these effects really shouldn't be relegated only to those with specific video hardware. And I'd personally like to see my quad core getting that use rather than my video card.

I'm still very interested in how Fermi affects this entire situation, from both a performance standpoint and in terms of other physics simulation capabilities :)
 
Yes, pointing out the blindingly obvious fact that implying only one side of the fanboy faction wars would act in such a manner is itself a clear indication of being one-sided certainly must speak volumes about my own biases.

Yeah, sounds about right, when you don't think about it. :cool:
C'mon John, I even used a smiley and yet you still bit on such an obvious troll of mine! Tsk tsk.
 
Well, I can't think of any games that received AMD devrel support or assistance having VendorID-locked features that were then incorporated into the game's DRM to make sure there was absolutely no way for an end user to enable said features on a rival card, and then backing that up with a legal contract and pressure from its legal arm restricting the game dev/publisher from enabling said feature, despite the other vendor providing a solution that was remarkably similar to what was locked out via VendorID.

And while AMD devrel may not pay as much money, be as extensive as Nvidia's, or be as public about using it for PR, it does exist.

Although the PR aspect appears to be changing a bit, if only to let people know that it does indeed exist. :D

Regards,
SB
 
And do you have proof of that slanderous bite at Nvidia concerning B:AA and the AA code they gave them?


I didn't think so.
 
Would be great if both AMD and Intel stopped Nvidia GPUs from working on their motherboards.

Silly analogy. A common hardware interface like PCIe is in no way comparable to proprietary software like CUDA/PhysX. That's one advantage of a proprietary product - there's no obligation to adhere to any standards since there aren't any. While we're at it why don't we demand that Microsoft make DirectX available in Linux?
 
Of course, AMD could just remove PCIe x16 and keep only x1 slots on their boards, then launch their own cards with their own proprietary interface.

Wonder how well the GTX 480 would run in a PCIe x1 slot?
 
Sure they could. All they would need to do is get Microsoft and other OS vendors, along with motherboard manufacturers to hop on board. Not going to happen. Again, trying to find equivalence between proprietary software and standard hardware interfaces is an exercise in futility. The ecosystems are just not comparable.
 
Considering AMD makes their own chipsets for their own CPUs, that allows them to do so. Motherboard makers will do what AMD wants; otherwise they lose business. Mobo makers don't make money from Nvidia.
 
Of course, AMD could just remove PCIe x16 and keep only x1 slots on their boards, then launch their own cards with their own proprietary interface.

Wonder how well the GTX 480 would run in a PCIe x1 slot?

That's what the PCIe sticker is for. AMD/Intel would get their nuts kicked in court if they tried it.
 
Why is that? The boards would still ship with PCIe slots; they would just support a maximum of x1.

AMD and Intel already have their own sockets that are not compatible with anything else. Why can't AMD say, "Oh, we have a HyperTransport slot especially for our graphics cards"?
 
Because only fanboys would buy such a "closed" system. Everyone else would go with an Intel system.
Maybe you should join AMD's PR team...
 
Silly analogy. A common hardware interface like PCIe is in no way comparable to proprietary software like CUDA/PhysX. That's one advantage of a proprietary product - there's no obligation to adhere to any standards since there aren't any. While we're at it why don't we demand that Microsoft make DirectX available in Linux?
It is, however, comparable to:

- disabling SLI on non-nVidia chipsets (oh, wait... that's a common use of a standard hardware interface!)
- disabling "PPU" PhysX when the display adapter is not from nVidia (again, a common use of a standard hardware interface)

CUDA and GPU PhysX themselves are (almost) perfectly OK. However, if they really were acting to improve the gaming experience (as stated by one of their PR people), and assuming their hardware/software were indeed faster, there would be no reason for them to release these as proprietary solutions.
 