NVIDIA shows signs ... [2008 - 2017]

PhysX is a key point of Nvidia's current marketing. They promoted these games as AAA titles, but some people and game reviewers had a different opinion. GRIN's bankruptcy showed that these games were neither successful nor the AAA titles Nvidia presented them as, and that PhysX is in fact mostly attached to second-rate, overhyped titles.
 
Ah, in that sense, ok. Not sure that the now "ancient" G.R.A.W PC-ports, a middling re-imagination, and two poor license tie-ins can be seen as a PhysX/Nvidia/tech failure, though, rather than Grin not being capable of making good games on their own.
 
"AAA title" is pretty subjective these days anyway. A lot of underhyped but quite fun games can get good marketing from TWIMTBP/PhysX hype, etc. I think he's right, though: if these developers were willing to work with Nvidia in the past, they will be in the future. Games like GRAW were very early incarnations of PhysX too and don't quite have the same "bang" factor that Mirror's Edge had, for instance.
 
Didn't realise GRIN had gone. I liked the GRAW games (did they do the console versions as well?)

PS: Chris, what game would you recommend as a PhysX showcase?
 
As a showcase? I don't know; everyone's preferences are different. Mirror's Edge and Cryostasis have some pretty good effects, but I personally didn't enjoy Cryostasis. I'm burned out on the first-person genre, and a lot of the problems with the game come from it trying to be a mystery/puzzle/action game all at once. But that's a general problem with the game itself, not the graphics or PhysX effects ((which I think are quite decent)). I really enjoyed the UT3 PhysX mod pack as well, but then I liked UT3. So the range of what you like/expect will vary. The two games that seem to benefit/immerse most with PhysX right now, I would say, are Mirror's Edge and Cryostasis.

I genuinely enjoyed Sacred 2, and I would never have played it if I hadn't heard about the PhysX add-on for the game. PhysX does some minor but neat things in that title. You're not going to find a title with "can't live without PhysX" effects; in all my experience they're just enhanced IQ, which is something I don't mind. Enhanced IQ and effects are a nice addition to any game, and you can turn it all off if you don't want it. As long as it's togglable I'm quite happy with it. I think Mirror's Edge uses the effects really well; I found the shattering-window interaction quite cool, and the ice melting in Cryostasis was cool as well.

In the other games the effects have been very subtle. But like I said, putting PhysX into games is also helping some of these lower-profile titles get more appreciation and more coverage from the press. Sacred 2 is an example of a game I'd never have tried had I not heard about PhysX, yet it's a fun ((Diablo-ish)) game I would have enjoyed without it. The extra effects it adds certainly don't hurt or distract either.

There are some new titles coming out soon that I'm trying to get from Nvidia, and I'll decide whether I like them or not. But I've seen a lot of demos of upcoming stuff, and PhysX looks a little more meaty to me as time goes on. The big problem is the timeframe from conception to application, and that's where PhysX still has its uphill battle in convincing the consumer.
 
Didn't realise GRIN had gone. I liked the GRAW games (did they do the console versions as well?)

PS: Chris, what game would you recommend as a PhysX showcase?

http://www.anandtech.com/mb/showdoc.aspx?i=3623&p=2

You may remember that we weren’t overly impressed with PhysX the last time we looked at it, but NVIDIA promises that the use of PhysX in Batman: Arkham Asylum is beyond anything we’ve ever seen. We’ll find out next month when Batman ships.
 


What exactly are you wanting to see, Davros? I would expect fluid/particle interaction to be the majority of what we see from PhysX for the near future, because PhysX ((and Havok as well)) are still in their infancy regarding developer support for advanced physics effects.
 
Just something that's cool. The problem with Cryostasis + Mirror's Edge is that if I was told it was all CPU-based, I would totally believe it.
 
Just something that's cool. The problem with Cryostasis + Mirror's Edge is that if I was told it was all CPU-based, I would totally believe it.

And why not? Things like PhysX/Havok are just software libraries for building physics effects. You're only limited by the performance of whatever is accelerating them. The fact is that everything in Mirror's Edge and Cryostasis can be run on the CPU; just turn off GPU PhysX acceleration.

Considering the CPU is the "do-everything device", expecting GPU PhysX to show off something the CPU can't do at all is a tad unrealistic. I don't think anything Nvidia has done has tried to prove the CPU can't do these effects; rather, they're rethinking how the effects are accelerated, using the GPU to provide performance advantages ((i.e. the strong focus on GPU compute)).
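For what it's worth, the software/hardware split is a per-scene switch in the SDK itself. Here's a minimal sketch of how a 2.x-era scene picks CPU vs. accelerated simulation; the class and enum names are from memory of the PhysX 2.x SDK, so treat them as assumptions rather than something checked against the docs.

Code:
// Minimal PhysX 2.x-style setup; the same content runs either way,
// only the simType flag changes where the simulation executes.
#include <NxPhysics.h>

NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

NxSceneDesc sceneDesc;
sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
sceneDesc.simType = NX_SIMULATION_SW;    // software (CPU) simulation
// sceneDesc.simType = NX_SIMULATION_HW; // hardware (PPU/GPU) acceleration

NxScene* scene = sdk->createScene(sceneDesc);
// ... create actors, then call scene->simulate(dt) and fetch results each frame.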
 
I mean, since the GPU is supposed to be an order of magnitude faster than a CPU, I would like to see something that proves that.
 
Well, there are some examples of this already taking place ((the Vantage PhysX simulations)). But the examples you want are going to have to come from another company, because it's not in Nvidia's interest to promote CPU PhysX. And I think you'll find there are some places the GPU excels and some places the CPU excels; I don't think there's a solution right now that is superior in all respects, and performance between GPU and CPU will depend on numerous variables. The only difference is that Nvidia owns PhysX now and is rolling out titles with it at a pretty steady rate. My own prediction, which could be way off, is that Havok will really go through its lift-off stages with LRB's arrival.
 
Well, there are some examples of this already taking place ((the Vantage PhysX simulations)).

Wow, a meaningless synthetic benchmark that is specifically targeted by driver optimizations benefits from PhysX. Someone call me when I should start caring...

But the examples you want are going to have to come from another company, because it's not in Nvidia's interest to promote CPU PhysX.

If they want PhysX to be popular, it is. The reality is that only half the gamers out there have NV cards. Software that works well on 50% of the systems out there is way less attractive than software that works well on 100% of them.

The big problem here is that nobody else wants to support PhysX because NV can't be trusted as a partner. If you contrast this with AMD's stewardship of HT or Intel's commitment to PCI or USB, it's pretty different. USB doesn't go and disable itself when used in an AMD or ARM system...

And I think you'll find there are some places the GPU excels and some places the CPU excels; I don't think there's a solution right now that is superior in all respects, and performance between GPU and CPU will depend on numerous variables. The only difference is that Nvidia owns PhysX now and is rolling out titles with it at a pretty steady rate. My own prediction, which could be way off, is that Havok will really go through its lift-off stages with LRB's arrival.

GPU shaders currently excel at graphics. Beyond that, in the consumer space, they don't seem to do a whole lot. The GPU can take advantage of huge bandwidth... but that's honestly its biggest advantage.

The CPU just wins hands down on anything that is latency sensitive with synchronization, complex data structures, unpredictable control flow, numerical precision, etc.

DK
 
Wow, a meaningless synthetic benchmark that is specifically targeted by driver optimizations benefits from PhysX. Someone call me when I should start caring...

I did not say it was really that important; it's just an example showing that it "can" scale better than a properly threaded benchmark that was originally created to test multi-core scaling. It is one of the very few examples we have of a properly threaded PhysX environment.

The big problem here is that nobody else wants to support PhysX because NV can't be trusted as a partner. If you contrast this with AMD's stewardship of HT or Intel's commitment to PCI or USB, it's pretty different. USB doesn't go and disable itself when used in an AMD or ARM system...

Developers trying to get their games extra marketing and better promotion don't seem to mind it very much. I see some consumer distaste for PhysX, but I see just as many people who like it. People are of course entitled to their opinions, but I just don't see a lot of that from the developer side of things, despite all the complaints I see from certain consumers. I still see PhysX titles coming out at a pretty steady rate, and I have yet to see a title where it cannot be toggled off.


I also don't think comparing a "software" library to hardware standards is all that apt a comparison. And Intel is hardly what I'd call "open standard" friendly with their hardware and interfaces, but that's another argument and I don't really care to go there.

Compare it to Havok, or at the very least to OpenCL/DirectX Compute, which have the possibility of being competing solutions. When it comes to Havok, there's far less of a moral high ground to stand on, because it has a lot of the same inherent limitations of being company-driven, licensed software where royalties are paid.

There is the possibility of a non-licensed, free, entirely open-standard OpenCL/DirectX Compute physics library, but I haven't seen anything like that yet.

GPU shaders currently excel at graphics. Beyond that, in the consumer space, ...

Yes, in the consumer market GPU compute hasn't hit a home run. I highly doubt Nvidia expected it to, at least not at the very start of things. There are some niche applications out there using it, though, that some consumers might be interested in. PhysX is also a way Nvidia has been able to take GPU compute and make it useful in some tangible way to the consumer market. Like it or hate it, PhysX now is 100x more usable than it was when Ageia owned it. There is zero advantage or incentive for Nvidia to try to provide a PhysX environment that runs solely on their competitors' CPU hardware ((AMD/Intel)). Havok may end up being the physics solution that solves all these problems ((I doubt it)), but currently GPU "Havok" physics is even less proven than PhysX. So we're all waiting for it to prove itself, for better or for worse.

I was never trying to argue that the GPU is superior to the CPU at all tasks. It isn't. But there are definitely tasks where the GPU excels compared to a threaded CPU environment. I don't know why you took my post as some kind of "extinction of CPUs" post, or perhaps I just misunderstood you; it wasn't meant that way either way. Nvidia is aiming to take some of the CPU's lucrative market share in heavy math computing areas. It's also using PhysX to promote its GPU compute in the consumer gaming market, and it's obviously working for them a bit, as they're making money off it.
 
Wow, a meaningless synthetic benchmark that is specifically targeted by driver optimizations benefits from PhysX. Someone call me when I should start caring...

Be that as it may, the point still stands that multi-threaded CPU physics isn't doing anything special either.

The big problem here is that nobody else wants to support PhysX because NV can't be trusted as a partner. If you contrast this with AMD's stewardship of HT or Intel's commitment to PCI or USB, it's pretty different. USB doesn't go and disable itself when used in an AMD or ARM system...

Comparing physical buses to a software library is quite a stretch. While I agree that PhysX would benefit significantly from more focus on the CPU side of things, it would benefit Nvidia even more if they could produce the sort of impact Davros is asking for.

The CPU just wins hands down on anything that is latency sensitive with synchronization, complex data structures, unpredictable control flow, numerical precision, etc.

And GPU architectures are just standing still when it comes to all those things?
 
The CPU just wins hands down on anything that is latency sensitive with synchronization, complex data structures, unpredictable control flow, numerical precision, etc.

Are you sure about that?

For example, there seems to be an unspoken assumption here that because you haven't seen complex data structures on the GPU, they somehow aren't possible. IMO it is much easier to build a high-performance multi-reader, multi-writer atomic queue for the GPU than it is for the CPU.
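To make that concrete, here's a minimal sketch of the producer side of such a queue on the GPU (not taken from any shipping library, just the shape of the idea): every thread reserves a unique slot with a single atomicAdd on a shared tail counter, with no locks or retry loops.

Code:
// Many-producer push into a global array acting as a queue.
// Assumes the buffer is big enough that overflow/wrap-around is never hit.
__global__ void push_items(int* buffer, unsigned int* tail,
                           const int* items, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        unsigned int slot = atomicAdd(tail, 1u); // reserve a unique slot
        buffer[slot] = items[i];
    }
}

The CPU version needs the same kind of atomic on a shared counter (or a lock) across every producer thread, so this is an illustration of the pattern rather than proof either way.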

Programming on the CPU is a very mature field, while programming on the GPU is new. It is too early to know for sure what the limitations of the GPU are, because there has been no end or slowdown in the breakthroughs, in research and in implementation, in new ways of solving problems on this massively parallel hardware.

What you think are limitations of the GPU might in fact just be limitations of emulating the problem-solving methods used on the CPU, and note that many of those methods cannot continue to scale even on future CPU hardware...
 
I mean, since the GPU is supposed to be an order of magnitude faster than a CPU, I would like to see something that proves that.

Erwin Coumans has been experimenting with CUDA in his Bullet physics engine for some time now. From late last year, on collision detection:

For 8192 fast moving objects on NVidia 8800 GTX, Intel Core 2 3Ghz: CUDA (btCudaBroadphase) 6ms, OPCODE Array SAP 37ms, Bullet dynamic BVH (AABB tree, btDbvtBroadphase): 12ms. When nothing moves, the CUDA broadphase still takes 4ms, whereas SAP and dynamic BVH (btDbvtBroadphase) are practically 0ms. For even larger amounts of moving objects (64k) the CUDA still performs fine, whereas SAP/dynamic BVH grind to a halt. This makes it a good candidate broadphase for environments with huge destruction.

He also presented a CUDA physics solver demo at the last GDC, based on the work of Takahiro Harada.

That should at least show some of the potential of GPU-accelerated physics from independent work. And it's open source too, in case someone feels challenged to optimize the CPU version.
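For anyone who wants to poke at it: the broadphase in Bullet is pluggable, so comparing the CPU variants is a few lines of setup. A minimal sketch with the standard Bullet 2.x classes (the CUDA broadphase ships in the SDK's extras and, as far as I know, plugs in through the same btBroadphaseInterface):

Code:
#include <btBulletDynamicsCommon.h>

int main()
{
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);

    // Dynamic AABB tree broadphase (btDbvtBroadphase); swap in btAxisSweep3
    // with explicit world bounds to test the SAP numbers quoted above.
    btDbvtBroadphase broadphase;

    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -10, 0));

    // ... add btRigidBody objects here, then step the simulation:
    for (int i = 0; i < 60; ++i)
        world.stepSimulation(1.0f / 60.0f);
    return 0;
}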

If they want PhysX to be popular, it is. The reality is that only half the gamers out there have NV cards. Software that works well on 50% of the systems out there is way less attractive than software that works well on 100% of them.

The big problem here is that nobody else wants to support PhysX because NV can't be trusted as a partner. If you contrast this with AMD's stewardship of HT or Intel's commitment to PCI or USB, it's pretty different. USB doesn't go and disable itself when used in an AMD or ARM system...

For now, NV is probably more interested in making it an exclusive feature of their own GPUs than in anything else, since a one percent increase in their current sales is worth something like 20 times more than 20 PhysX licenses per year. Besides, Unreal Engine and Gamebryo -- two of the most widely licensed game engines -- already come bundled with PhysX, thanks to its ability to run on 100% of the systems out there too.

And I think it's naive to believe those CPU companies share the same level of enthusiasm as NV when it comes to giving the GPU a bigger role in computing right now. Why do you think Intel killed Havok FX? And AMD, whose GPU business only makes up 20-25% of their revenue, is hardly convincing in this regard either.
 
Are you sure about that?

For example, there seems to be an unspoken assumption here that because you haven't seen complex data structures on the GPU, they somehow aren't possible. IMO it is much easier to build a high-performance multi-reader, multi-writer atomic queue for the GPU than it is for the CPU.

It's not possible with current-generation hardware. Try traversing a linked list on a GPU and a CPU; there's a huge performance difference.

Programming on the CPU is a very mature field, while programming on the GPU is new. It is too early to know for sure what the limitations of the GPU are, because there has been no end or slowdown in the breakthroughs, in research and in implementation, in new ways of solving problems on this massively parallel hardware.

Programming for a GPU is the same as programming for a cluster. It's all mature, but the hardware is slightly different.

Synchronization is anathema to these architectures, period.

What you think are limitations of the GPU might in fact just be limitations of emulating the problem-solving methods used on the CPU, and note that many of those methods cannot continue to scale even on future CPU hardware...

Not really. People have been programming systems far more parallel than GPUs for decades. The only difference is that said systems were expensive and rare, hence fewer programmers.

The problem is that to attack a bigger problem space, the GPU guys need to be able to handle more complex workloads. They could add hardware to do this (e.g. caches, branch predictors), but their computational density will fall as a result.

David
 
I did not say it was really that important; it's just an example showing that it "can" scale better than a properly threaded benchmark that was originally created to test multi-core scaling. It is one of the very few examples we have of a properly threaded PhysX environment.

Developers trying to get their games extra marketing and better promotion don't seem to mind it very much. I see some consumer distaste for PhysX, but I see just as many people who like it. People are of course entitled to their opinions, but I just don't see a lot of that from the developer side of things, despite all the complaints I see from certain consumers. I still see PhysX titles coming out at a pretty steady rate, and I have yet to see a title where it cannot be toggled off.

You're missing my point. PhysX is not an integral part of a game, but an optional one, because it will not run on 100% of their target market. If you want software to be used by game developers, they need to know it will not limit their addressable market.

That's one reason why folks like PGI exist even when Intel has its own compilers that could produce great code for AMD - Intel tries to prevent ICC from emitting optimal code for AMD CPUs. That's fine and perfectly legit (as is tying PhysX to NV hardware or CUDA or whatever), but it limits market uptake.

I also don't think comparing a "software" library to hardware standards is all that apt a comparison. And Intel is hardly what I'd call "open standard" friendly with their hardware and interfaces, but that's another argument and I don't really care to go there.

It's a software library tied to specific hardware. For all intents and purposes, it is a feature of that hardware.

Compare it to Havok, or at the very least to OpenCL/DirectX Compute, which have the possibility of being competing solutions. When it comes to Havok, there's far less of a moral high ground to stand on, because it has a lot of the same inherent limitations of being company-driven, licensed software where royalties are paid.

Havok, though, runs on any CPU - whether it's attached to an ATI GPU, an Intel one or an NV one - so a developer can count on it working. It may not always use the GPU, but it will function.

There is the possibility of a non-licensed, free, entirely open-standard OpenCL/DirectX Compute physics library, but I haven't seen anything like that yet.

That's when physics on the GPU will start to matter.

Yes, in the consumer market GPU compute hasn't hit a home run. I highly doubt Nvidia expected it to, at least not at the very start of things. There are some niche applications out there using it, though, that some consumers might be interested in.

It was marketed aggressively and didn't live up to expectations. I'm calling a spade a spade here.

PhysX is also a way Nvidia has been able to take GPU compute and make it useful in some tangible way to the consumer market. Like it or hate it, PhysX now is 100x more usable than it was when Ageia owned it.

I totally agree with you here. Buying new hardware was crazy talk; at least NV can re-use existing hardware for PhysX.

There is zero advantage or incentive for Nvidia to try to provide a PhysX environment that runs solely on their competitors' CPU hardware ((AMD/Intel)). Havok may end up being the physics solution that solves all these problems ((I doubt it)), but currently GPU "Havok" physics is even less proven than PhysX. So we're all waiting for it to prove itself, for better or for worse.

If NV was smart, they'd get PhysX to run on ATI cards. That would substantially drive adoption, since almost all gamers have ATI or NV cards. That would enable developers to rely on GPU acceleration of physics. As it is, they can't rely on that.

I was never trying to argue that the GPU is superior to the CPU at all tasks. It isn't. But there are definitely tasks where the GPU excels compared to a threaded CPU environment. I don't know why you took my post as some kind of "extinction of CPUs" post, or perhaps I just misunderstood you; it wasn't meant that way either way. Nvidia is aiming to take some of the CPU's lucrative market share in heavy math computing areas.

HPC isn't that lucrative, but it's a good strategy for now.

It's also using PhysX to promote its GPU compute in the consumer gaming market, and it's obviously working for them a bit, as they're making money off it.

GPU compute for consumers is irrelevant, and I doubt it's made them any money worth talking about. Who says "I'm buying an NV GPU because of PhysX"? Nobody sane. If NV has a better card, they'll buy NV; if ATI does, they'll buy ATI. I doubt PhysX makes a significant difference.

My point is that if NV was smart, they would try to get PhysX implemented as widely as possible, on all types of hardware. That would get developers to actually use it as an integral element of their tool box. As it is, it's not.

David
 
It's not possible with current-generation hardware. Try traversing a linked list on a GPU and a CPU; there's a huge performance difference.
I am not sure this is correct in general. GPUs are actually good at traversing linked lists as long as you have enough bandwidth to spare, even when there is almost no work done on each list node.
The reason is that since SM3.0, GPUs have had to be quite good at handling dependent texture reads, which is an operation very similar to traversing linked lists. In fact, I wouldn't be surprised if a modern GPU (with the proper boundary conditions) beat a modern CPU at this task.
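The workload both of you are describing is just a chain of dependent loads. Here's a minimal sketch of the GPU-friendly version, where each thread walks its own list so thousands of in-flight threads hide each hop's latency (an illustration of the access pattern, not a benchmark):

Code:
// Each node stores the index of its successor, or -1 at the end of the list.
struct Node { int next; int value; };

// One thread per list head: pure pointer chasing within a thread, with the
// latency hidden by running many independent lists concurrently.
__global__ void walk_lists(const Node* nodes, const int* heads,
                           int* sums, int num_lists)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_lists) return;

    int sum = 0;
    for (int cur = heads[i]; cur != -1; cur = nodes[cur].next)
        sum += nodes[cur].value;
    sums[i] = sum;
}

If there is only a single long list, there is no parallelism to hide the latency behind and the CPU's caches win comfortably, which is presumably the case David has in mind.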
 