NVIDIA shows signs ... [2008 - 2017]

Ask about Batman and the "8800+ nVidia MSAA only" thing please too. It seems to work fine if you rename it to UT3 and force AA through CCC... but if you enable it through some .ini editing, the game refuses to run, saying you don't have supporting hardware.

Little things like that make me a wee bit leery of nVidia and their ability to "play nice". ;)
 
This will be focused on PhysX. I have a limited amount of time and I want to focus on the issue I actually scheduled the call for. Either way, despite my disconnect with some of you guys' opinions, I do appreciate your taking the time to give them to me. I knew Beyond3D would not be an "easy" feedback environment, but I did get some good feedback on things, and there are issues I do agree with you guys on.

Hopefully, with all this time and energy spent, something will come of it. At the very least some information.
 
Instinct Technology has collaborated with Dark Water Studios to create a movie showing how their middleware product Instinct Studio is able to use NVIDIA’s CUDA technology to offload AI from the CPU onto the GPU of the graphics card.
= late response to ATI Froblins.
Planes flying around is cooler looking than froblins though :cool:
 
This will be focused on PhysX. I have a limited amount of time and I want to focus on the issue I actually scheduled the call for.
C'mon, it's one simple question that shouldn't take all that long and it has been drawing lots of interest! Besides, isn't Batman like their premier PhysX title now? It segues nicely and it'll cover your bases and not make you look like you're just doing a softball fluff piece. :yep2:
 
Come on, guys. I'm doing this in the hopes I can get some questions answered as well as make PhysX better for people by providing feedback. I have to schedule a call like this ahead of time; there's only so much time for talking. And I have a specific agenda which I'm not even sure I'll have enough time to cover entirely. In order to get another shot at this, I'll have to arrange another conference call.

The guys at Nvidia are busy. This isn't something where I call them up and go "Hey, let's have a beer and talk about PhysX and other randomness". I specifically scheduled appointment time with the PhysX portions of Nvidia's team. Even if I asked that question, they probably wouldn't have a good response anyway. This is about getting information/feedback to Nvidia regarding PhysX, as well as getting more information so that I can answer questions about it more accurately for the community at large. I am not writing any sort of article. I am only trying to benefit Nvidia with broad types of feedback and be benefited by their perspective on these types of debates, as well as hope to make an impact on some of their future decisions regarding PhysX.

There really is nothing nefarious about what I am trying to do. This is no different from my feedback work on such things as SLI AA/Nvidia thermal management. But the issue is just bigger.
 
So, how long before OpenCL is "mature"? Or, how long before OpenCL is dropped in favour of ...

OpenCL is mature as soon as people start releasing drivers. Which is basically now.

Also, separately, I guess NVidia's strategy with PhysX, now that it is fully proprietary, is to get game developers to build gameplay physics, not just effects physics, totally cutting off non-NVidia gamers. I wouldn't be surprised if Futuremark's game, whatever it's called, goes that way.

good luck with that. Nothing like cutting out 90% of the TAM.
 
You missed the point, Neliz. What prevents AMD from porting PhysX to OpenCL, assuming AMD had licensed PhysX instead of Havok? Havok is proprietary technology that AMD licensed. Havok is not an open standard; it is run and driven by Intel. Havok and PhysX are both closed standards run by two competing companies (Intel/Nvidia). You have to license either of them.

You are missing timelines there, Chris. ATI had been working with Havok for quite some time on accelerating it. In addition, looking at the timelines, Havok was pretty much the only game in town and has a MUCH higher installed base.

And neither you nor I know what contracts are in place between ATI/Havok or when they were signed.
 
In order to use PhysX, AMD has to license it, just like Havok. And yes, there is driver-side work AMD would have to do to make PhysX work for them. Hell, they are porting the entire Havok library to make it work for them in OpenCL.

Actually, Chris, unless Nvidia uses some proprietary extension, PhysX ported to CL will run on any device with a CL driver stack. That is one of the points of CL.

And they aren't porting the entire Havok library to make it work for them in OpenCL; they are porting it to the CL runtime. There is an important distinction there.
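
To make that concrete, here's a minimal toy sketch of the CL model (an entirely hypothetical kernel, nothing to do with actual PhysX source): the host enumerates whatever platform/device happens to be installed, and the vendor's own driver compiles the kernel at runtime. That's why the same source runs on any conformant device.

[CODE]
// Toy illustration of OpenCL's vendor-neutral model -- NOT PhysX code.
#include <CL/cl.h>
#include <stdio.h>

// Trivial "physics" kernel, compiled at runtime by whichever driver is present.
const char* src =
    "__kernel void integrate(__global float* pos, __global const float* vel, float dt) {\n"
    "    int i = get_global_id(0);\n"
    "    pos[i] += vel[i] * dt;\n"
    "}\n";

int main() {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);                        // AMD, NVIDIA, whoever
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

    // The vendor's own compiler builds the kernel for its hardware:
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);

    char name[128];
    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("kernel built for: %s\n", name);
    return 0;
}
[/CODE]

(Error handling omitted; the point is just that no vendor-specific path exists unless someone deliberately adds one.)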
 
Why don't you ask Intel that? Whether the ATI division had a "good" relationship with Intel is really beside the point. The ATI division is just as much Intel's competitor as Nvidia is. And Intel is just as cutthroat as you guys believe Nvidia is. Intel is the guiding force behind Havok and they control its direction. AMD is busy porting Havok to OpenCL.

You do realize that Intel/AMD have a LONG history of coopetition, right?

Havok is not open source, and Havok is not free to use. The question is moot because AMD has already made this compromise; they just compromised with Intel, not Nvidia. If Nvidia compromised and let AMD port PhysX to OpenCL on their own under a license, how would this be any different? Or would this not be enough?

Actually, they worked with Havok pre-Intel buyout of Havok.

OK, I get it. AMD doesn't want to use CUDA, and PhysX is on CUDA. But it doesn't necessarily have to be, and that could possibly change in the future. I have seen nothing that suggests this is a completely closed door. AMD has even spoken of the possibility of future talks with Nvidia in an ExtremeTech article posted back in March. I am in good faith here trying to listen to the arguments of why you guys dislike PhysX and what can be done to improve the situation. But some of the comments here have me scratching my head.

PhysX currently is CUDA-only. The only ones that can port it to a non-proprietary language are Nvidia.

The primary reason I dislike PhysX atm is that the only thing it can accelerate is minimal side candy. It's basically useless as it stands as a path to more complex integrated physics within games. I've played games with GPU PhysX enabled and disabled, and the only real effect it has is a reduction in frame rate. You pretty much have to go out of your way to use it, and it can only be used in a non-interactive way currently because it is closed middleware with a small marketplace. The only thing that will change that is Nvidia porting it to a non-proprietary runtime.
 
The primary reason I dislike PhysX atm is that the only thing it can accelerate is minimal side candy. It's basically useless as it stands as a path to more complex integrated physics within games

The second part is probably true. The first part is not: PhysX and Havok are very similar in overall capability. Yes, devs are somewhat limited by the fact that they can't do much besides "add on", but it's entirely possible to do these effects. But personally, I reiterate that I don't think it's a good thing that AMD users are denied half the game because it's using Nvidia's GPU PhysX.

You do realize that Intel/AMD have a LONG history of coopetition, right?

Cooperation when it's within their interest. AMD and Intel also have a long history of legal battles, PR wars (much like the Nvidia/ATI PR fights), and fights over x86 licensing and where that boundary ends. Much of this cooperation exists purely because of the cross-licensing agreement that Intel is stuck with in regards to AMD, an agreement that I'm positive Intel would like nothing more than to see the end of. AMD didn't give Intel 64-bit extensions out of the goodness of their heart. They gave it to them because the cross-license agreement calls for this kind of thing.

This "ATI made AMD and Intel friends because of Havok" line just doesn't fly with me. I have seen nothing to suggest that, and I certainly don't see why Intel would be going out of its way to support AMD in its GPU physics department.

You are missing timelines there, Chris. ATI had been working with Havok for quite some time on accelerating it. In addition, looking at the timelines, Havok was pretty much the only game in town and has a MUCH higher installed base.

And neither you nor I know what contracts are in place between ATI/Havok or when they were signed.

HavokFX is dead; Intel buried it. It's pretty much irrelevant at this point.

Before I go any further with this argument, I need to know a few things.

1) Do you have any links to AMD's progress with their OpenCL port of Havok, besides that nearly 6-month-old demo of the red dress? What new has been said about Havok? Because if there's something new, I'd like to see it, as I am interested.

2) I'm gonna refrain from commenting on AMD's alleged OpenCL Havok driver until it publicly exists. It's incredibly hard to argue the merits/cons compared to PhysX until it is actually there.

Actually, Chris, unless Nvidia uses some proprietary extension, PhysX ported to CL will run on any device with a CL driver stack. That is one of the points of CL.

Yes, but will it run optimally? There is still work that would have to be done. In all honesty it's probably moot. The impression I got tonight is that AMD has no interest in PhysX, which is the impression I had before. And even if Nvidia extended their hand to help AMD promote PhysX on AMD hardware, AMD wouldn't want that anyway; from a business perspective it's an Nvidia technology. Just like it's not in Nvidia's interest to build AMD a PhysX driver, it's not really in AMD's interest to optimise or promote PhysX at all. Conflicts of interest. However, Nvidia does still insist that if AMD approached them and was interested in licensing their technology and using their methodology, they'd help them do so. But again, moot. Unlikely to happen.

Moving on to the separate issue: my first thoughts on my conference call.

Anyway, the first part of my call focused on CPU PhysX: how it performs and how it's optimised.

1) There are levels of "PhysX" which can be implemented. Take a look at the plane demo mentioned above. You can only accelerate so many particles/destructible objects; the math isn't really on the side of the CPU past a certain point.

Yes, you could probably get a bit higher performance from a more multithreaded CPU PhysX implementation, but this is partially a developer concern. A dev has the choice to implement "softer PhysX", so to speak: fewer collisions, fewer destructible particles, less fancy effects. It's technically possible, and there are tons of games with minor PhysX effects that run just off the CPU right now.
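
As a rough sketch of what that developer choice could look like (hypothetical structure and numbers, not anything from the PhysX SDK), the same game code can simply scale its effect density to whichever processor will run the simulation:

[CODE]
// Hypothetical "softer PhysX" budget -- illustrative only, not NVIDIA's API.
#include <cstddef>

struct EffectsBudget {
    std::size_t maxDebrisParticles;  // destructible debris cap
    std::size_t maxClothVerts;       // cloth resolution
    bool        fancyFluid;          // expensive fluid effects on/off
};

EffectsBudget pickBudget(bool gpuPhysicsAvailable) {
    if (gpuPhysicsAvailable)
        return { 30000, 8192, true };   // GPU path: pile on the particles
    return { 2000, 1024, false };       // CPU path: fewer collisions, fewer particles
}
[/CODE]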

Use Cryostasis as an example. It has "Advanced PhysX" and normal "PhysX"; both can be run on the CPU. Normal, non-advanced PhysX is nowhere near as fancy but runs on the CPU fine. The advanced PhysX runs on the CPU too, but the performance isn't there.

PhysX "can" be multithreaded. There's no inherent limitation in the PhysX library that prevents optimal CPU usage; take a look at 3DMark Vantage, which is properly threaded. So this is up to the developer. There are also tons of games which use the PhysX middleware for pretty simple physics effects right now.
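
Purely to illustrate that point (plain std::thread here, not the actual PhysX SDK API): the kind of per-body work a physics step does parallelises cleanly across cores if the developer bothers to batch it.

[CODE]
// Conceptual sketch of a multithreaded CPU physics step -- not PhysX itself.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos[3]; float vel[3]; };

// Integrate one disjoint batch of bodies; batches share no state, so no locks.
void integrateRange(std::vector<Body>& bodies, std::size_t begin,
                    std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        for (int a = 0; a < 3; ++a)
            bodies[i].pos[a] += bodies[i].vel[a] * dt;
}

void step(std::vector<Body>& bodies, float dt) {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 2;
    std::size_t chunk = (bodies.size() + n - 1) / n;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t b = t * chunk;
        std::size_t e = std::min(bodies.size(), b + chunk);
        if (b >= e) break;
        pool.emplace_back(integrateRange, std::ref(bodies), b, e, dt);
    }
    for (auto& th : pool) th.join();  // everything lands before the next frame
}
[/CODE]

(Real solvers are harder to split because of shared contact constraints, which is exactly why it's extra work for the dev.)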

For the more advanced effects, devs probably don't care to do so, because they can't get the performance for the effects Nvidia is pushing (Nvidia is pushing maximum particle usage, collisions, etc.), and those types of PhysX effects are streamlined to run optimally on the GPU (they also get a lot of Nvidia devrel love in making them run as fast as possible on the GPU). Whereas the dev would have to do the CPU optimisations themselves, which Nvidia wouldn't be too interested in helping with; it doesn't fit their business model. Though it's technically supportable.

So as far as PhysX CPU performance goes: if it's basic physics with minimal destructible objects/particles/occlusion, you should run just fine (see Cryostasis' non-advanced physics).

For the fancier things, unless the developer bothers (i.e. comes up with a middle way between the crazy amount of GPU particles and the limited effects; see the plane demo as an example of math throughput), you probably aren't gonna see much. It just means more work for the dev.

In short: I would not expect to see major performance gains on the CPU for upcoming titles leveraging GPU PhysX. As a matter of fact, I actually anticipate fully leveraged GPU PhysX to get even tougher on the GPU. I think you'll be surprised by Batman's recommended GPU for PhysX, though I can't say this for certain yet without testing it (I will be getting a copy). I expect Batman to be one of the first games where a dedicated GPU for PhysX might actually be ideal, due to the extreme PhysX workload that's supposedly coming for it.

Davros <-- I saw some amazing stuff with Batman. Stuff that makes Cryostasis and Mirror's Edge look silly. I can't speak for everyone, but I was truly impressed by some of the final levels.

I have more to talk about but I probably won't be back until sometime after the weekend.
 
aaronspink said:
The primary reason I dislike PhysX atm is that the only thing it can accelerate is minimal side candy
Unlike, say, what the GPU does the rest of the time? Heh.
 
Unlike, say, what the GPU does the rest of the time? Heh.

The GPU is accelerating the WHOLE scene, which is a CORE part of the gameplay and experience. Because of the severe limits of GPU accelerated physics, GPU physics at best accelerates nebulous side candy like random fabric, etc., that is of little consequence and has no effect on gameplay. In all honesty, you are currently better off turning GPU accelerated physics off and enhancing the general visual settings of the games.

The honest truth is that game physics hasn't really progressed one bit from HL2, and I'd dare say that HL2 has more physics involved than the vast majority of current games out there. This will likely stay true until widespread accelerated physics becomes a reality.
 
The GPU is accelerating the WHOLE scene, which is a CORE part of the gameplay and experience. Because of the severe limits of GPU accelerated physics, GPU physics at best accelerates nebulous side candy like random fabric, etc., that is of little consequence and has no effect on gameplay. In all honesty, you are currently better off turning GPU accelerated physics off and enhancing the general visual settings of the games.

The honest truth is that game physics hasn't really progressed one bit from HL2, and I'd dare say that HL2 has more physics involved than the vast majority of current games out there. This will likely stay true until widespread accelerated physics becomes a reality.


Hmm, it really depends on what you are looking for GPU physics to do for you. Can you give me a list of things you would want to see in future games? The more intensive physics effects I'm thinking of have a lot to do with game design and not just eye candy, and they will make game design much more difficult, so there have to be limitations on what can be done. Level design would have to be re-thought as well, which will in turn affect the AI.
 
How about the truth?

Here's a question you can answer: "Do you personally think NV told the Batman devs to deny MSAA to ATI hardware?"

I don't think it's that simple, or that the game devs exclude hardware in quite that way. I think what happens is that Nvidia writes the code and gives it to the game devs, but retains the copyright and only provides it as-is, with device ID checks that only let it run on Nvidia hardware.

So Nvidia says "This is our code, we've spent our resources writing it for you to only run on our hardware, you can use it in your game, as long as we still own it and you don't change it".
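
Speculating on what such a check looks like in practice (an illustration, not the actual Batman code): in a D3D9-era title the adapter's PCI vendor ID is one call away, and 0x10DE is NVIDIA's vendor ID.

[CODE]
// Illustrative vendor gate of the kind being described -- not shipped game code.
#include <d3d9.h>

bool vendorAllowsExtras(IDirect3D9* d3d) {
    D3DADAPTER_IDENTIFIER9 id;
    if (FAILED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id)))
        return false;
    return id.VendorId == 0x10DE;  // NVIDIA's PCI vendor ID; everyone else loses the feature
}
[/CODE]

That would also fit the rename-to-UT3 observation earlier in the thread: forcing AA at the driver level sidesteps any in-game gate, while the in-game path stays locked.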

Nvidia have gone from writing code to benefit a game, to writing code to benefit their own hardware. I'm not surprised.
 
The real question, it seems to me, is not what Nvidia has or has not done; it's why ATI hasn't done the same. They could have worked with the Batman developers, couldn't they?
 