bit-tech Richard Huddy Interview - Good Read

We're exclusively talking about PhysX here... since that is the scope of Huddy's comments. Those benchmarks don't discuss PhysX.

But your link? You showed me a game with CPU-PhysX that doesn't show any abnormal behaviour.
Without proof that PhysX has no support for more than two cores, I think it's nothing more than FUD from Fuddy.
 
Two things for Fuddy:
nVidia doesn't sell CPUs. Why do you expect them to do your job and help the ISVs?
A lot of new games are using or will use CPU-PhysX. Somebody should tell those ISVs that they'll get very bad multi-CPU support. And how would it help nVidia sell more GPUs when there is no GPU-PhysX support? :rolleyes:

Except that there ARE multi-threading optimisations as well as CPU optimisations in PhysX on both PS3 and X360, but they actively remove those for the PC... There are also CPU optimisations for handheld CPUs which are likewise removed for the PC.

THAT is what he is referring to.

I can also see why Nvidia is reluctant to port PhysX to OpenCL, as OpenCL will by default compile the code so that it runs across multiple threads, taking advantage of all CPU cores. That is something Nvidia desperately does NOT want PhysX to do, as it would negate some of the "huge" artificial advantages it currently has when GPU PhysX is compared to CPU PhysX, even though there would still be an advantage.
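As an aside, here's a minimal sketch of what "by default" means in practice: an OpenCL CPU device reports all logical cores as compute units, so a kernel enqueued on it gets spread across them without any extra threading code in the host application. This assumes pyopencl and a CPU OpenCL runtime (e.g. Intel's or AMD's) are installed; it's just an illustration, not anything from the PhysX codebase.

```python
# Minimal sketch: query OpenCL CPU devices to see how many compute units
# (logical cores) a kernel would be spread across by default.
# Assumes pyopencl and a CPU OpenCL runtime are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    try:
        cpu_devices = platform.get_devices(device_type=cl.device_type.CPU)
    except cl.Error:
        continue  # this platform exposes no CPU device
    for device in cpu_devices:
        # On CPU runtimes, max_compute_units usually equals the number of
        # logical cores, so an enqueued kernel is scheduled across all of
        # them without explicit threading in the host code.
        print(platform.name, device.name, device.max_compute_units)
```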

Regards,
SB
 
Without proof that PhysX has no support for more than two cores, I think it's nothing more than FUD from Fuddy.

The Computerbase benchmark has all ATI cards stuck at 21fps with CPU PhysX and very low (14%) CPU utilization. Get your head out of your ***.
 
50% bigger than dual RV870 would be something like a 1000mm² die, so everything points to a typo :smile:

He meant this: Fermi = 100% Cypress + 50% Cypress = 150% Cypress = 1.5 × 334mm² = 501mm². I don't exactly see how that can be interpreted as Fermi being almost three times the size of Cypress (1000/334), but English is not my native language, so some subtleties of it may be escaping me.
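For clarity, here is the arithmetic behind the two possible readings, using the commonly quoted ~334mm² figure for Cypress (numbers are approximate):

```python
# Rough arithmetic for the two readings of "50% bigger" (approximate figures).
cypress_mm2 = 334  # ~die size of a single Cypress (HD 5870)

bigger_than_single = 1.5 * cypress_mm2      # "50% bigger than Cypress (HD 5870)"
bigger_than_dual = 1.5 * (2 * cypress_mm2)  # "50% bigger than dual Cypress (HD 5970)"

print(bigger_than_single)  # 501.0 mm^2
print(bigger_than_dual)    # 1002.0 mm^2 -- roughly 3x a single Cypress
```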
 
The Computerbase benchmark has all ATI cards stuck at 21fps with CPU PhysX and very low (14%) CPU utilization. Get your head out of your ***.

I think you overlooked my comment: I wasn't talking about GPU-PhysX. :LOL:

Except that there ARE multi-threading optimisations as well as CPU optimisations in PhysX on both PS3 and X360, but they actively remove those for the PC... There are also CPU optimisations for handheld CPUs which are likewise removed for the PC.
THAT is what he is referring to.

Yeah, and where is the proof?

/edit: Without CPU-PhysX in the software there would be no GPU-PhysX, so it makes perfect sense to cripple your CPU-PhysX in order to push your GPUs...
 
But your link? You showed me a game with CPU-PhysX that doesn't show any abnormal behaviour.
Without proof that PhysX has no support for more than two cores, I think it's nothing more than FUD from Fuddy.

My link shows exactly what was said. A multi-core processor was tested WITH and WITHOUT PhysX enabled, and CPU usage didn't go up very much, suggesting that the CPU isn't really being used. It is supposed to go up quite a bit (translating into better performance) or down quite a bit (translating into saved CPU resources).

/edit: Without CPU-PhysX in the software there would be no GPU-PhysX, so it makes perfect sense to cripple your CPU-PhysX in order to push your GPUs...

That is contradicted by this:

The Computerbase benchmark has all ATI cards stuck at 21fps with CPU PhysX and very low (14%) CPU utilization. Get your head out of your ***.

I would still like a link for that.

If you're going to use CPU-PhysX, you would at least need to see the other cores being used for it to be a proper comparison; otherwise it isn't a useful comparison at all. Might as well disable all the cores!
 
When they bought Ageia, they had a fairly respectable multicore implementation of PhysX.

I dunno, that's pretty black and white there with regard to the PC. The fact that it no longer takes advantage of multiple cores... Well, you're obviously going to form your own opinions no matter what is said or presented, so we'll leave it at that.

And if Huddy is right that Bullet and possibly one other physics middleware developer will have a GPU accelerated version of their product out this year, then it'll all be moot soon.

Regards,
SB
 
Here's another link about benching PhysX on the same Batman game:
http://www.tomshardware.com/reviews/batman-arkham-asylum,2465-10.html

They use ATI cards with PhysX, so that means CPU-PhysX... you shouldn't expect great performance anyway, but read this specifically...

Rather than clearing things up, the results of this testing have only left us more puzzled. We can recall that the PhysX API is supposed to be optimized for multiple threads, yet on the Core i7, only a single thread seems to be stressed when PhysX is cranked up.

In theory, performance shouldn't suck this badly, except that there's no effort to improve performance! This is essentially the same analysis as when NVidia's hardware is used in my HardOCP link. What's the point of PhysX acceleration if it doesn't get some extra performance from the CPU? Go back to single core? I don't think so.
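For anyone who wants to check this kind of per-core behaviour themselves, here's a minimal sketch (not the method Tom's or HardOCP used, just a generic approach assuming the psutil package is installed) that logs per-core utilization while the game runs, which makes a "one core pegged, the rest idle" pattern easy to spot:

```python
# Minimal sketch: sample per-core CPU utilization once per second while a
# game is running, to see whether only a single core is being stressed.
# Assumes the psutil package is installed; run alongside the game.
import time
import psutil

for _ in range(60):  # sample for roughly one minute
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1s
    stamp = time.strftime("%H:%M:%S")
    print(stamp, " ".join(f"{load:5.1f}%" for load in per_core))
```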
 
No, he is talking about PhysX's CPU support as a whole. And I don't see the problem with nVidia not optimizing the GPU-PhysX code for CPUs. It's not their problem that the CPU vendors don't care about physics in games.

Dude ... where have you been? There have been multiple threads about this, and other than you no one has even doubted the validity of the claims since the original Techreport articles on core utilization with CPU PhysX. No one, quite regardless of manufacturer bias.

The evidence is clear enough ... this isn't FUD from Richard Huddy, this is just you being extremely dense.

PS. Which is not to say it's "wrong" of them to do so, or even "wrong" of developers to accept it in return for free programmers and such ... it's of course perfectly legal and it makes economic sense up to a point (although we will probably get to the point where handing a large part of your customers an underperforming game engine stops making economic sense for developers ... if enough big games don't toe the line). It's just ultimately not advantageous to gamers to create warring camps with proprietary standards (or, even more dangerous, games with code intentionally de-optimized for the competition). Without Carmack making sure there was a level playing field by pushing for miniGL there probably would not have been an NVIDIA ... any developer pushing PhysX is not doing gamers a favour (looking at you, Mr. Sweeney :)).
 
He meant this: Fermi = 100% Cypress + 50% Cypress = 150% Cypress = 1.5 × 334mm² = 501mm². I don't exactly see how that can be interpreted as Fermi being almost three times the size of Cypress (1000/334), but English is not my native language, so some subtleties of it may be escaping me.

The question was regarding the HD5970, which is dual Cypress = 2 × 334mm².
That's where all the confusion is coming from :smile:
We all suspect that Huddy's answer was about the HD5870, so, as you point out, single Cypress + 50%.
 
What I don't understand is the following:

Why do so many new games come out with glaring problems on ATI cards that don't get fixed until months later? By then most gamers are finished with the game that's having the problems.

What I don't know is what an acceptable answer from him to this type of question would be. People who have seen this stuff for a while take everything he says with an epic boulder of salt. It's one thing for them to be honest and forthright and another for the words to be taken as gospel.

PR now means that no one believes a word you say either way, so it's almost gotten to the point where they may as well not say anything at all!
 
The question was regarding the HD5970, which is dual Cypress = 2 × 334mm².
That's where all the confusion is coming from :smile:
We all suspect that Huddy's answer was about the HD5870, so, as you point out, single Cypress + 50%.

The 5970 bit is obviously a typo, in that the question (and the answer) itself was about the 5870 and it got scrambled as it was uploaded.
 
So can someone who plays or keeps track of STALKER comment on his statement about it being shit while it was a TWIMTBP game, and ATI then fixing it with later patches, e.g. including a DX10 AA implementation that works fine on NV hardware?

I liked the answer to the 'why buy a 5870' question :D
Can't imagine an NV spokesperson ever saying 'stick with what you have if it works for you, or go for our 3rd-ranked card' :LOL:
(Actually very carefully worded and perfectly targeted at the cost-conscious upgrader who skips a generation or two, which I guess is why it rings my bell.)
 
I liked the answer to the 'why buy a 5870' question :D
Can't imagine an NV spokesperson ever saying 'stick with what you have if it works for you, or go for our 3rd-ranked card' :LOL:

Well, they themselves have been stuck with what they have for quite some time now.
They could certainly say 'stick with what you have if it works for you, or just rebrand it like we do.' :p
 
Just a thought: if you didn't know much about PC hardware, you could actually upgrade your NV card three times and end up with the same card you started with.
 
Just a thought: if you didn't know much about PC hardware, you could actually upgrade your NV card three times and end up with the same card you started with.

I think I've done that before! :oops:

Well, I guess it's a good way to test out the placebo effect! :D
 