NVIDIA shows signs ... [2008 - 2017]

It's not possible with the current generation hardware. Try traversing a linked list on GPU and CPU. There's a huge performance difference.

It sure is possible on current GPU hardware, and you would not use a linked list (a linked list is a painfully bad data structure for coherency and parallel processing on the CPU as well).
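To make that concrete, here is a minimal C++17 sketch (illustrative only, not taken from any engine or SDK): the pointer-chasing list forces one dependent, cache-unfriendly load after another, while the contiguous array is order-independent and can be reduced in parallel across cores or GPU lanes.

[code]
// Minimal sketch: linked list vs. contiguous array for parallel processing.
#include <cstdio>
#include <numeric>
#include <vector>
#include <execution>

struct Node { float value; Node* next; };

// Pointer chasing: every load depends on the previous one, so this is
// inherently serial and hostile to caches and wide parallel hardware.
float sum_list(const Node* head) {
    float total = 0.0f;
    for (const Node* n = head; n != nullptr; n = n->next)
        total += n->value;
    return total;
}

// Contiguous and order-independent: the reduction can be split across
// threads (or GPU lanes) because no element depends on another.
float sum_array(const std::vector<float>& values) {
    return std::reduce(std::execution::par, values.begin(), values.end(), 0.0f);
}

int main() {
    Node c{3.0f, nullptr}, b{2.0f, &c}, a{1.0f, &b};
    std::vector<float> v{1.0f, 2.0f, 3.0f};
    std::printf("list: %f  array: %f\n", sum_list(&a), sum_array(v));
}
[/code]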

Programming for a GPU is the same as programming for a cluster. It's all mature, but the hardware is slightly different.

No, it is quite different. Programming a cluster is typically all about message passing and limiting latency (the network is the bottleneck). GPU programming currently (at least single-GPU) doesn't involve any exposed network message passing.

Synchronization is anathema to these architectures, period.

The problem of synchronization is larger in the CPU space due to how operating systems schedule threads. If you need a real example of this, think about the typical latency of threaded job result usage in games (latency is kept large to avoid stalling at sync points where dependent results are needed from asynchronous job invocations). For some developers this latency is an entire game frame (id, for example). Others have more dependent operations happening in one frame, but nothing even close to the number of dependent draw calls you can issue on the GPU in a given frame.
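For anyone unfamiliar with the pattern being described, here is a rough C++ sketch (the names and structure are hypothetical, not any particular engine's job system): work is kicked off this frame and only consumed next frame, so the main thread never blocks at a sync point.

[code]
// Hypothetical one-frame-latency job pattern, for illustration only.
#include <future>
#include <vector>

struct AiResults { std::vector<int> decisions; };

// Stand-in for an expensive job that runs asynchronously.
AiResults run_ai_jobs() { return AiResults{}; }

int main() {
    std::future<AiResults> pending;                 // job kicked off last frame
    for (int frame = 0; frame < 100; ++frame) {
        // Consume last frame's results; they are ready by now, so this rarely stalls.
        AiResults previous = pending.valid() ? pending.get() : AiResults{};

        // Kick off this frame's job; nothing waits on it until next frame.
        pending = std::async(std::launch::async, run_ai_jobs);

        // ... simulate and render the frame using `previous` ...
        (void)previous;
    }
}
[/code]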

The only difference is that said systems were expensive and rare, hence fewer programmers.

And as being so expensive and rare, their use was towards solving a tiny subset of easy-to-parallelize problems.

My point here is simple: when provided with two different tools to solve a similar problem, for example a lock pick and a sledgehammer for breaking through a door, you wouldn't attempt to use the pick like a sledgehammer, nor the sledgehammer like a lock pick. Both, however, can be used successfully to solve the problem.

People often only think of using simple serial problem-solving techniques (like linked lists, for example) to solve problems, when in fact there is a large expanse of possible solutions. It isn't hard to see the connection here when you consider that your "general purpose" C/C++ program is in fact running on a statically defined, rigid, highly parallel connection of transistors known as a CPU...
 
You're missing my point. PhysX is not an integral part of a game, but an optional one, because it will not run on 100% of their target market. If you want software to be used by game developers they need to know it will not limit their addressable market.

I don't really see this as missing the point. How would you feel about a game that doesn't run at all? Right now PhysX adds to the game visually while taking nothing away from gameplay if you don't have it. It can be turned off. It adds something extra and takes nothing away. That's far better than nothing, and definitely better than a game that isn't capable of running without it.

Nobody has suggested that games all of a sudden "stop working" because of PhysX. I think we can all agree that a game should be able to function and be playable on all pieces of hardware. PhysX is an add-on, and I think it should be treated as such.

That's when physics on the GPU will start to matter.

When someone has the time and the charity to make it happen. But why does it matter so much who makes it, be it Intel/Nvidia/AMD? If Nvidia were behind Havok, the chances of them leveraging the same kind of effects and GPU power would be very slim.

Havok though, runs on any CPU - whether it's attached to an ATI GPU, an Intel one or an NV one. So a developer can count on it working. It may not always use the GPU, but it will function.

PhysX runs on the CPU too, whether it's attached to an Intel/Nvidia/AMD GPU. It may not be the best performance, but it will function. Would you rather it not run at all if the performance is a lot slower? CPUs, like GPUs, will always get faster. (Curious about your viewpoint on this.)

If NV was smart, they'd get PhysX to run on ATI cards.

So Nvidia should build a CUDA/OpenCL PhysX driver for AMD GPU hardware out of the goodness of their heart? I guess Intel should do the same? Are you going to apply the same idealism when Intel releases its GPU, which will in all likelihood accelerate physics through the GPU as well? Do you honestly believe Intel isn't going to leverage their GPU design for things like GPU computing, such as physics workloads, where it's beneficial for them to do so? (Once again, I am curious about your answer here.)

GPU compute for consumers is irrelevant and I doubt it's made them any money worth talking about. Who says "I'm buying an NV GPU because of PhysX"? Nobody sane. If NV has a better card, they'll buy NV; if ATI does, they'll buy ATI. I doubt PhysX makes a significant difference.

Buying a GPU just because of PhysX? Or making a choice influenced by having PhysX as an available option? I think you'd be surprised. Features are features, and features sell. PhysX is a feature for Nvidia hardware right now and might just tip the scale in a purchase decision.

Some people buy cards purely for Folding@home performance. Some people buy for 8x MSAA. Some people buy for the heatsink color to match their windowed case. Your opinion does not exactly reflect everyone's opinion on this matter, nor does mine. Anyway, I'll have to pick up this conversation tomorrow. While I don't share your opinion, I am interested in hearing your viewpoint, particularly regarding your opposition to PhysX. Negative viewpoints allow me to communicate with Nvidia, and I'm trying to organize a conference call with them regarding consumer feedback on PhysX.

I'm not just interested in an argument here, but in a reason why you seem so averse to GPU physics from Nvidia. But I'll be up front and honest here: "because it doesn't run on AMD GPUs" is probably not gonna be very helpful. Specifically, the questions I asked in the first two quotes are of particular interest to me.
 
Since nobody else has posted it, Charlie posted a grim outlook for NV.
Seems somewhat excessively pessimistic to me.

The roadmap and its (intended?) shortcomings have been discussed at great length in the GT300 topic, and CJ posted the exact same info last week.
 
When someone has the time and the charity to make it happen. But why does it matter so much who makes it, be it Intel/Nvidia/AMD? If Nvidia were behind Havok, the chances of them leveraging the same kind of effects and GPU power would be very slim.

PhysX runs on the CPU too, whether it's attached to an Intel/Nvidia/AMD GPU. It may not be the best performance, but it will function. Would you rather it not run at all if the performance is a lot slower? CPUs, like GPUs, will always get faster. (Curious about your viewpoint on this.)

If NV wants to make PhysX on the GPU an integral and standard part of the gaming experience (thereby increasing GPU compute usage for consumers), then it needs to work for all gamers. Half the gamers buy ATI cards, so it needs to work on ATI cards.

So Nvidia should build a CUDA/OpenCL PhysX driver for AMD GPU hardware out of the goodness of their heart?

Porting to OpenCL would be smart; an alternative would be to work with AMD to enable them to support PhysX (and in time Intel). Hint: OpenGL and DirectX beat Glide for a reason.

I guess Intel should do the same? Are you going to apply the same idealism when Intel releases its GPU, which will in all likelihood accelerate physics through the GPU as well? Do you honestly believe Intel isn't going to leverage their GPU design for things like GPU computing, such as physics workloads, where it's beneficial for them to do so? (Once again, I am curious about your answer here.)

NV and Intel are in different positions. NV is a leader, Intel is a new entrant. Industry leaders often need to do things which benefit their competitors in order to move the whole industry forward. Look at how much AMD has benefited from PCI and PCI-Express, InfiniBand, etc.

If Intel wants to make something an integral part of the gaming experience they will need to do the same (get it to work for most of the industry).

Intel already understands this - they tried to twist SW developers' arms on IA64 and that didn't work well. They understand you have to woo and appeal to the SW devs' self-interest.

Some people buy cards purely for Folding@home performance. Some people buy for 8x MSAA. Some people buy for the heatsink color to match their windowed case.
Your opinion does not exactly reflect everyone's opinion on this matter, nor does mine. Anyway, I'll have to pick up this conversation tomorrow. While I don't share your opinion, I am interested in hearing your viewpoint, particularly regarding your opposition to PhysX. Negative viewpoints allow me to communicate with Nvidia, and I'm trying to organize a conference call with them regarding consumer feedback on PhysX.

I'm not just interested in an argument here, but in a reason why you seem so averse to GPU physics from Nvidia. But I'll be up front and honest here: "because it doesn't run on AMD GPUs" is probably not gonna be very helpful. Specifically, the questions I asked in the first two quotes are of particular interest to me.

It has nothing to do with running on AMD GPUs, it has everything to do with running on ALL GPUs used by gamers. That some of those GPUs are AMD ones is merely coincidental.

Imagine for a minute that PhysX ran on all GPUs used by gamers. Then a software developer could assume very high levels of performance and make it a requirement. That would push the adoption of GPU compute far faster than making it a mere optional shiny feature.

Imagine a game that required you to destroy terrain to get past key parts, where realistic physics was an essential element of gameplay - that's what this would enable, and I think that's what NV really would like to see.

David
 
If NV wants to make PhysX on the GPU an integral and standard part of the gaming experience (thereby increasing GPU compute usage for consumers), then it needs to work for all gamers. Half the gamers buy ATI cards, so it needs to work on ATI cards.

Well, that depends. If NVIDIA hopes that PhysX will be adopted for "real" physical simulation (i.e., simulation that changes game mechanics), then it had better not work only on NVIDIA's hardware, because no game developer is going to tie that kind of functionality to a specific GPU vendor. On the other hand, all current PhysX GPU-enhanced effects are just cosmetic, so they can be seen as additional value instead of something essential. This way, it can work on NVIDIA only, because if you buy ATI's card you just don't see the special effects, but it doesn't affect gameplay. This is nothing more than, say, buying a faster card so you can enable shadows. Though, some may find that to be quite shallow.

Porting to OpenCL would be smart; an alternative would be to work with AMD to enable them to support PhysX (and in time Intel). Hint: OpenGL and DirectX beat Glide for a reason.

Unfortunately, porting to OpenCL is not enough. Unless AMD and NVIDIA converge their GPU architectures in the future (which is, well, not that unlikely), it still has to be optimized for both architectures, because something that runs very well on NVIDIA's GPU, even written in OpenCL, could run very badly on AMD's GPU, and vice versa.
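For what it's worth, here is a minimal sketch of why the tuning differs (standard OpenCL host API calls, error handling mostly omitted): the same kernel source runs on every vendor, but compute-unit counts, work-group limits and local memory sizes differ, so the launch configuration that's fast on one architecture can be slow on another.

[code]
// Query the per-device characteristics that drive OpenCL performance tuning.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices,
                           &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_uint compute_units = 0;
            size_t max_wg = 0;
            cl_ulong local_mem = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(compute_units), &compute_units, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_WORK_GROUP_SIZE,
                            sizeof(max_wg), &max_wg, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_LOCAL_MEM_SIZE,
                            sizeof(local_mem), &local_mem, NULL);
            // Work-group sizes and blocking factors would be chosen from these,
            // which is exactly the part that ends up vendor-specific.
            std::printf("%s: %u CUs, max work-group %zu, local mem %llu bytes\n",
                        name, compute_units, max_wg,
                        (unsigned long long)local_mem);
        }
    }
    return 0;
}
[/code]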

NV and Intel are in different positions. NV is a leader, Intel is a new entrant. Industry leaders often need to do things which benefit their competitors in order to move the whole industry forward. Look at how much AMD has benefited from PCI and PCI-Express, InfiniBand, etc.

If Intel wants to make something an integral part of the gaming experience they will need to do the same (get it to work for most of the industry).

It'd be nice if they did that. For example, if they decided to open up C for CUDA, or even propose it as a standard, then there wouldn't be a need for OpenCL at all. It's the same for PhysX. For example, if they opened up the PhysX API and maybe even open-sourced it (the CPU code alone would suffice), and made it royalty-free, it could put very serious pressure on Havok.
 
PhysX runs on the CPU too, whether it's attached to an Intel/Nvidia/AMD GPU. It may not be the best performance, but it will function. Would you rather it not run at all if the performance is a lot slower? CPUs, like GPUs, will always get faster. (Curious about your viewpoint on this.)
Is this really true? When I see PhysX on/off videos (like a recent Batman video), they don't show the same effects with increased/decreased performance; they show the effects missing in the CPU version.
 
Hint: OpenGL and DirectX beat Glide for a reason.


David

Ah, OGL and DirectX beat out Glide for two reasons. First and foremost, because of very poor execution by 3DFX in getting products to market in a timely manner, and secondly, any game that was written for Glide first almost always needed multiple patches to get it to work in OGL and/or DX. UT needed 48 patches before OGL and DX both worked decently, and UT was written for Glide first with OGL and DX as an add-on/afterthought.
 
Is this really true? When I see PhysX on/off videos (like a recent Batman video), they don't show the same effects with increased/decreased performance; they show the effects missing in the CPU version.

To see PhysX run by the CPU, you need to either trick the system into thinking no Nvidia card is installed or run the game using an ATI card.
 
To see PhysX run by the CPU, you need to either trick the system into thinking no Nvidia card is installed or run the game using an ATI card.
Under what setup do they get the version that doesn't show the effects? Or is this just a hack for illustration purposes?
 
On the other hand, all current PhysX GPU-enhanced effects are just cosmetic, so they can be seen as additional value instead of something essential.

Cosmetic, and in most of the cases I've seen, bad. Sorry, I've played several of the PhysX signature titles and the visual enhancement offered by GPU acceleration isn't really any good. To even notice it, you have to specifically play to show it off.


Though, some may find that to be quite shallow.

Well it is quite shallow.


Unfortunately, porting to OpenCL is not enough. Unless AMD and NVIDIA converge their GPU architectures in the future (which is, well, not that unlikely), it still has to be optimized for both architectures, because something that runs very well on NVIDIA's GPU, even written in OpenCL, could run very badly on AMD's GPU, and vice versa.

It could, but it would run a hell of a lot better than it does now. Realistically, for physics acceleration to be successful, it will have to be OpenCL-based. Optimization between the various architectures can be handled by the vendor-specific layers that will be implemented to support OpenCL.

Nvidia's fear should be that Havok gets ported to OpenCL and optimized by Intel/AMD for their hardware, leaving them out in the cold. If Havok runs well on AMD/Intel, that's pretty much 90%+ of the market, which is fairly significant.



It'd be nice if they did that. For example, if they decided to open up C for CUDA, or even propose it as a standard, then there wouldn't be a need for OpenCL at all.

Too late. CUDA is dead already, just sitting around in the hospice ward atm. Nvidia had their chance to make it an open standard and didn't. OpenCL already has more momentum than CUDA has. All the vendors are behind it in the GPU realm, and many outside it as well.

It's the same for PhysX. For example, if they opened up the PhysX API and maybe even open-sourced it (the CPU code alone would suffice), and made it royalty-free, it could put very serious pressure on Havok.

Possibly, but I don't know if it's in Nvidia's nature to do anything like that. And even then, I'm not sure it would be successful versus Havok, which can always do the same thing.
 
I have a conference today, so I don't have much time, but I do want to point something out. I see that you want OpenCL PhysX. I don't think that's "out of the question", but it still doesn't guarantee that AMD (or Intel) or anyone else will leverage PhysX.

AMD had the same opportunity to license PhysX as they did Havok (and they did license Havok). In AMD's case, Intel is focused on CPU physics for the time being, so for now AMD isn't pressured to do anything specific with Havok other than make sure it runs on, and is compatible with, the CPU. Both companies would have to license it and build their own drivers to support it. I think it's unreasonable to expect Nvidia to build the drivers. But an OpenCL port of PhysX, I get that.

Also, on the subject of "CPU" PhysX: all PhysX games that I know of (with the exception of GRAW) have the ability to turn off/disable PhysX in game. In order to disable GPU-accelerated PhysX you just switch it off in the Nvidia control panel. The "side by side" comparisons that you often see are with PhysX disabled from within the game.

Is this really true? When I see PhysX on/off videos (like a recent Batman video), they don't show the same effects with increased/decreased performance; they show the effects missing in the CPU version.

While I haven't tested the new Batman game (I should be getting a copy soon), yes, it is true for the games I've spoken about (Mirror's Edge, Cryostasis, UT3, and Sacred 2), GRAW 2 being the exception. Disabling GPU-accelerated PhysX in the control panel does not disable PhysX; it simply sets the CPU as the device which handles PhysX. PhysX effects (and whether PhysX is used at all) are toggled in the game menus. I just played Cryostasis and Sacred 2 with the CPU doing the PhysX processing. It's a lot slower, but it does work. GRAW 2 will not allow you to turn on PhysX effects without GPU-accelerated PhysX enabled.

It's ultimately the developer who decides whether PhysX is used or not; the Nvidia control panel just determines what device accelerates it, which is why all these games have it toggleable from within the game.
 
I have a conference today, so I don't have much time, but I do want to point something out. I see that you want OpenCL PhysX. I don't think that's "out of the question", but it still doesn't guarantee that AMD (or Intel) or anyone else will leverage PhysX.

But that's the thing, they don't have to do anything to leverage it if it is an OpenCL app. Their OpenCL driver will run it unless it uses proprietary extensions.

AMD had the same opportunity to license PhysX as they did Havok (and they did license Havok). In AMD's case, Intel is focused on CPU physics for the time being, so for now AMD isn't pressured to do anything specific with Havok other than make sure it runs on, and is compatible with, the CPU.

It's no secret that AMD has ported a large portion of Havok to OpenCL.

Both companies would have to license it and build their own drivers to support it. I think it's unreasonable to expect Nvidia to build the drivers. But an OpenCL port of PhysX, I get that.

It's likely that any computational middle layer that wants to be taken seriously in the future will be based on the OpenCL runtime, which will run on either CPU or GPU.

The battle for a portable computational runtime is over, and it is OpenCL. CUDA doesn't really offer anything that OpenCL doesn't, and OpenCL implementations will be available from all the hardware vendors. Ergo, dead CUDA. The only thing that could potentially derail OpenCL would be DirectX Compute Shaders (CS), but again, dead CUDA.

It's up to the hardware vendor to do the appropriate work on their OpenCL runtime and compiler to get good performance, just like it is up to them to write their DX/OpenGL driver.
 
But that's the thing, they don't have to do anything to leverage it if it is an OpenCL app. Their OpenCL driver will run it unless it uses proprietary extensions.

They have to build a driver to do so. There's a conflict of interest for AMD/Intel right now in building a PhysX driver for their GPUs, as the CPU is still their primary business. PhysX is heavily optimised for, and oriented towards, GPU acceleration right now, including most of its implementation, but you can still run PhysX on the CPU if you want.

It's no secret that AMD has ported a large portion of Havok to OpenCL.

Yes, but how are they leveraging it? I'm all for GPU-accelerated Havok from AMD, but right now I'd say AMD's dedication to GPU physics is a bit unproven. Let's say Nvidia does port PhysX to OpenCL, which I suspect will eventually happen. Will AMD support it in their own GPU drivers?

CUDA was here first and allowed Nvidia to get a leg up on GPU compute. But around the time OpenCL was being created, AMD chose Havok - to license it, mind you. AMD is obviously capable of building and porting Havok to OpenCL with a license; why couldn't they have done the same with PhysX? Let's be clear that AMD is paying for the right to port Havok to their OpenCL driver. It's not the case that, just because OpenCL arrived, Havok usage is automatically free.

I see no reason for Nvidia to do the work for everyone else; they want to gain leverage from their rather large investment in GPU PhysX. Nvidia could build an OpenCL PhysX library, but it would still be up to the IHV to build a driver to support it. Where Nvidia's responsibility does lie is in improving their "CPU" support of PhysX for when GPU acceleration is not an option: better multithreading support, etc. But I don't suspect that will mean "CPU" PhysX will ever fully catch up to GPU PhysX, especially in particle simulation.

The battle for a portable computational runtime is over, and it is OpenCL. CUDA doesn't really offer anything that OpenCL doesn't, and OpenCL implementations will be available from all the hardware vendors. Ergo, dead CUDA. The only thing that could potentially derail OpenCL would be DirectX Compute Shaders (CS), but again, dead CUDA.

While it may be true that CUDA is nearing the end of its lifetime, it's not the end of the road for Nvidia at all. OpenCL is basically built off the backbone of CUDA, and Nvidia supports it. So long as the GPU is doing the compute work, Nvidia is quite happy to support OpenCL and even DirectX Compute.

*Edit* Just want to add something: there's no reason for CUDA to just die, since OpenCL is basically running on top of Nvidia's CUDA right now. It's not like there was a lot of work to be done on Nvidia's part here.
 
I see no reason for Nvidia to do the work for everyone else; they want to gain leverage from their rather large investment in GPU PhysX. Nvidia could build an OpenCL PhysX library, but it would still be up to the IHV to build a driver to support it.

AMD has already shown OpenCL drivers and demos...

I fail to see why they'd have to do anything special to run an OpenCL implementation of the PhysX API, and it's not like it would be the only application using OpenCL...

I still don't understand why, when the GPU is the limiting factor in most games' performance, you'd want to put more load on it as opposed to the under-utilised CPU. The majority of hardware PhysX demos I've seen where you have an on/off PhysX effect (Mirror's Edge, for example) show a significant performance hit for a few bits of fluttering fabric and shattering glass.

That, and watch PhysX CPU usage in software mode - the basis of GPU computing is parallelism, yet the CPU will still sit with one core under full load and the others idle. It seems fairly clear that the CPU side of the API is intentionally left crippled so as to make the GPU side seem useful.
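To be clear about what "not crippled" would look like, here is a purely hypothetical C++ sketch of a multithreaded CPU path for an embarrassingly parallel particle update; this is not PhysX's actual code, just an illustration that this kind of work splits cleanly across cores.

[code]
// Hypothetical multithreaded CPU particle integration, for illustration only.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

void integrate_range(std::vector<Particle>& particles, std::size_t begin,
                     std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        particles[i].vy -= 9.81f * dt;            // apply gravity
        particles[i].x  += particles[i].vx * dt;  // advance position
        particles[i].y  += particles[i].vy * dt;
        particles[i].z  += particles[i].vz * dt;
    }
}

void integrate_parallel(std::vector<Particle>& particles, float dt) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (particles.size() + cores - 1) / cores;
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrate_range, std::ref(particles), begin, end, dt);
    }
    for (auto& w : workers) w.join();             // every core gets a slice, not just one
}

int main() {
    std::vector<Particle> particles(100000);
    integrate_parallel(particles, 1.0f / 60.0f);
}
[/code]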
 
I had actually updated that paragraph, before you posted, to include that I do believe it is Nvidia's responsibility to improve the CPU side of their PhysX implementation. Not sure if you missed it or just clipped it out.
 
I had actually updated that paragraph, before you posted, to include that I do believe it is Nvidia's responsibility to improve the CPU side of their PhysX implementation. Not sure if you missed it or just clipped it out.

Missed it, sorry about that.
 
I suppose this thread is as good a place as any. This is the display adapter page from the largest Finnish e-retailer. At the bottom right one can see the current top 10 of best-selling cards ("Myydyimmät", i.e. best sellers). Currently the first nine spots are taken by AMD.

If the situation is anything similar in other countries, Nvidia needs to wake up or something.
 