bit-tech Richard Huddy Interview - Good Read

What I don't understand, after reading the following quote:

The bottom line is we have enough; we don't let triple-A or double-A titles go through the net and in fact it's extremely rare for a Saboteur situation to come up. It's very rare for titles as a whole to go past us without seeing our QA and testing labs along the way. Our team is big enough and actually exceedingly good at some of the things they do. I believe it is smaller than the Nvidia team that's directly related to developer relations, but I have some of the best engineers in the world doing some really great stuff in terms of R&D with graphics optimisations: lighting and shadowing.

is why so many new games come out with glaring problems on ATI cards that don't get fixed until months later. By then, most gamers are already done with the game that had the problems.

Games like Borderlands had grey texture bugs for months after release that only got fixed in Catalyst 9.12; Resident Evil 5 had the screen black out during cutscenes; MW2 had bugs with flash bangs; many users have been unable to force AF in Dragon Age since the 9.11s; NFS Shift took months to be fixed; and so on.

ATI's rigid one-driver-per-month release schedule has hurt them in a pretty big way. Users shouldn't have to deal with such huge bugs in new games, having to wait for months after release before they can play them properly; bugs which users of Nvidia cards don't have.
 
bugs which users of Nvidia cards don't have.

Sorry? Have you ever read a "new NV driver" release thread? The land seems too barren for the grass to be green on either side of the fence.
Unless, of course, you think that this is something that's not described as an issue.
 
Sorry? Have you ever read a "new NV driver" release thread? The land seems too barren for the grass to be green on either side of the fence.
Unless, of course, you think that this is something that's not described as an issue.

I'm not saying Nvidia drivers are 100% bug free, but for newly released games I see far more ATI users having issues than Nvidia users. Are you saying it's the opposite?
 
bit-tech: IF - given we don't know yet - when Nvidia's Fermi cards finally arrive they are faster than your HD 5970-

RH: -Well if it's not faster and it's 50 per cent bigger, then they've done something wrong-

I'd be inclined to believe Huddy is thinking the question is about the 5870 when he says this, but I've read the same sentiment before on this forum. Why would a Fermi card need to be faster than a dual GPU Cypress card?
 
I'm not saying Nvidia drivers are 100% bug free, but for newly released games I see far more ATI users having issues than Nvidia users. Are you saying it's the opposite?

No, I'm saying both have their fair share of issues.
 
I'd be inclined to believe Huddy is thinking the question is about the 5870 when he says this, but I've read the same sentiment before on this forum. Why would a Fermi card need to be faster than a dual GPU Cypress card?

The 5970 also doesn't make sense to me; it could be an honest typo (at least they didn't write "Firmy").
 
The 5970 also doesn't make sense to me; it could be an honest typo (at least they didn't write "Firmy").

50% bigger than dual RV870 would be something like a 1000 mm² die, so everything points to a typo :smile:

Very good interview indeed!

This bit made my day :LOL::
[Nvidia] put PhsyX in there, and that's the one I've got a reasonable amount of respect for. Even though I don't think PhysX - a proprietary standard - is the right way to go, despite Nvidia touting it as an "open standard" and how it would be "more than happy to license it to AMD", but [Nvidia] won't. It's just not true! You know the way it is, it's simply something [Nvidia] would not do and they can publically say that as often as it likes and know that it won't, because we've actually had quiet conversations with them and they've made it abundantly clear that we can go whistle.
 
The other thing is that all these CPU cores we have are underutilised and I'm going to take another pop at Nvidia here. When they bought Ageia, they had a fairly respectable multicore implementation of PhysX. If you look at it now it basically runs predominantly on one, or at most, two cores. That's pretty shabby! I wonder why Nvidia has done that? I wonder why Nvidia has failed to do all their QA on stuff they don't care about - making it run efficiently on CPU cores - because the company doesn't care about the consumer experience it just cares about selling you more graphics cards by coding it so the GPU appears faster than the CPU.

It's the same thing as Intel's old compiler tricks that it used to do; Nvidia simply takes out all the multicore optimisations in PhysX. In fact, if coded well, the CPU can tackle most of the physics situations presented to it. The emphasis we're seeing on GPU physics is an over-emphasis that comes from one company having GPU physics... promoting PhysX as if it's Gods answer to all physics problems, when actually it's more a solution in search of problems.
Two things for Huddy:
nVidia does not sell CPUs. Why do you expect them to do your job of helping the ISVs?
A lot of new games are using or will use CPU-PhysX. Somebody should tell those ISVs that they will get very bad multi-core CPU support. And how does it help nVidia sell more GPUs when there is no GPU-PhysX support? :rolleyes:
 
Two things for Huddy:
nVidia does not sell CPUs. Why do you expect them to do your job of helping the ISVs?
A lot of new games are using or will use CPU-PhysX. Somebody should tell those ISVs that they will get very bad multi-core CPU support. And how does it help nVidia sell more GPUs when there is no GPU-PhysX support? :rolleyes:

Mr. Sontin, how does Nvidia run PhysX on an iPhone, which also doesn't have a GPU...?
Doh!
 
Two things for Huddy:
nVidia does not sell CPUs. Why do you expect them to do your job of helping the ISVs?
A lot of new games are using or will use CPU-PhysX. Somebody should tell those ISVs that they will get very bad multi-core CPU support. And how does it help nVidia sell more GPUs when there is no GPU-PhysX support? :rolleyes:

I think he was talking about PhysX underutilizing multi-core CPUs. HardOCP ran tests on this, and they showed that the CPU was not being used efficiently at all, so what Huddy is saying is correct. The problem is that a supposedly CPU-driven feature such as physics can be done on the GPU, yet no effort is made to provide any CPU assistance for PhysX.

Here's the link for that site: http://www.hardocp.com/article/2009/10/19/batman_arkham_asylum_physx_gameplay_review/11

HardOCP article said:
With graphics and PhysX running on the same GTX 285 GPU, our CPU usage changed hardly at all. Core 0 was the most heavily loaded, at an average of about 51% and a peak utilization of 97%. Across all four cores, we saw an average utilization of 35%, and a peak of 71%. That is only 4% lower on average than with PhysX running entirely on the CPU.
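
For anyone curious, that kind of per-core measurement is easy to reproduce at home. Here's a rough Python sketch using psutil (run it alongside the game; obviously not the tooling HardOCP actually used, just the same idea):

```python
# Rough sketch: sample per-core CPU utilization while a game runs,
# then report the average and peak per core, similar in spirit to
# the numbers HardOCP quotes above. Requires the psutil package.
import time
import psutil

SAMPLE_SECONDS = 60   # how long to record
INTERVAL = 1.0        # seconds between samples

samples = []  # each entry is a list like [core0%, core1%, ...]
end = time.time() + SAMPLE_SECONDS
while time.time() < end:
    # percpu=True returns one utilization percentage per logical core
    samples.append(psutil.cpu_percent(interval=INTERVAL, percpu=True))

cores = len(samples[0])
for core in range(cores):
    values = [s[core] for s in samples]
    avg = sum(values) / len(values)
    print(f"core {core}: avg {avg:.1f}%, peak {max(values):.1f}%")

overall = [sum(s) / cores for s in samples]
print(f"all cores: avg {sum(overall)/len(overall):.1f}%, peak {max(overall):.1f}%")
```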
 
Mr. Sontin, how does Nvidia run PhysX on an iPhone, which also doesn't have a GPU...?
Doh!

Will it help them sell more GPUs because PhysX runs on the iPhone? :LOL:
Why should anybody use PhysX over Havok or other free physics engines when the CPU support is in such bad shape?

I think he was talking about PhysX underutilizing multi-core CPUs. HardOCP ran tests on this, and they showed that the CPU was not being used efficiently at all, so what Huddy is saying is correct. The problem is that a supposedly CPU-driven feature such as physics can be done on the GPU, yet no effort is made to provide any CPU assistance for PhysX.

No, he is speaking about the whole CPU support of PhysX. And I don't see the problem with nVidia not optimizing the GPU-PhysX code for CPUs. It's not their problem that the CPU vendors don't care about physics in games.
 
No, he is speaking about the whole CPU support of PhysX. And I don't see the problem with nVidia not optimizing the GPU-PhysX code for CPUs. It's not their problem that the CPU vendors don't care about physics in games.

Since when did CPU vendors control physics? They don't. Developers do, and they can do it with a CPU if they so desire. PhysX is a vehicle to make this "easier" for the CPU as a method of acceleration. It's not supposed to be "the only way" to make it accessible to all.

If you consider PhysX as a form of T&L, which was the whole "give the CPU less work to do" deal of the DX7 era, then they've done a very poor job of it. It's not a great analogy, but for the purposes of what PhysX is supposed to offer, making no effort to let the CPU run more efficiently for better performance is not acceptable if that's what PhysX was claiming in the first place... to help accelerate physics.

I don't like the idea that PhysX does absolutely nothing for the CPU... you're supposed to help out the CPU to make your own product look better anyway. Saying that it shouldn't help out the CPU is utter nonsense for hardware that is supposed to reduce CPU overhead for physics.
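
Just to illustrate how low the bar is: spreading even a naive physics update across every core isn't hard. Here's a toy Python sketch (nothing to do with PhysX's actual internals, purely to show the idea of splitting the work across cores):

```python
# Toy example: integrate a big batch of independent particles under gravity,
# splitting the work across all available CPU cores with a process pool.
# Purely illustrative; real physics engines batch and schedule work far
# more cleverly than this.
from multiprocessing import Pool, cpu_count

GRAVITY = -9.81
DT = 1.0 / 60.0

def step_chunk(chunk):
    # chunk is a list of (y_position, y_velocity) pairs
    out = []
    for y, vy in chunk:
        vy += GRAVITY * DT
        y += vy * DT
        if y < 0.0:            # crude ground collision with damping
            y, vy = 0.0, -vy * 0.5
        out.append((y, vy))
    return out

if __name__ == "__main__":
    particles = [(10.0, 0.0)] * 100_000
    workers = cpu_count()
    chunk_size = len(particles) // workers + 1
    chunks = [particles[i:i + chunk_size]
              for i in range(0, len(particles), chunk_size)]

    with Pool(workers) as pool:
        for frame in range(120):              # simulate 2 seconds at 60 fps
            chunks = pool.map(step_chunk, chunks)

    print("done on", workers, "cores")
```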
 
Since when did CPU vendors control physics? They don't. Developers do, and they can do it with a CPU if they so desire. PhysX is a vehicle to make this "easier" for the CPU as a method of acceleration. It's not supposed to be "the only way" to make it accessible to all.

So, what is the point of the PhysX bashing from Fuddy? There are developers who have used or are using PhysX as their main CPU physics engine. It's not nVidia's fault that they don't use all of the free CPU cores.

If you consider PhysX as a form of T&L, which was the whole "give the CPU less work to do" deal of the DX7 era, then they've done a very poor job of it. It's not a great analogy, but for the purposes of what PhysX is supposed to offer, making no effort to let the CPU run more efficiently for better performance is not acceptable if that's what PhysX was claiming in the first place... to help accelerate physics.

I see the point, but GPU-PhysX is only one part of the whole "GPU physics" thing. As long as no independent developer uses both CPU- and GPU-PhysX in one program, I see no problem with nVidia not optimizing their code.
 
So, what is the point of the PhysX bashing from Fuddy? There are developers who have used or are using PhysX as their main CPU physics engine. It's not nVidia's fault that they don't use all of the free CPU cores.

He's saying that prior to Nvidia acquiring PhysX, the company (Ageia) was doing just dandy with multi-core acceleration. Since tests show that no more than two cores are being used, he's saying the tech is being intentionally "held back" to make you buy a video card rather than a better CPU. I should think that upgrading from a single core to a Core i7 would actually help physics out, but that's not really the case under this scenario.
 
He's saying that prior to Nvidia acquiring PhysX, the company (Ageia) was doing just dandy with multi-core acceleration. Since tests show that no more than two cores are being used, he's saying the tech is being intentionally "held back" to make you buy a video card rather than a better CPU.

I don't see any tests. He is just claiming something.
This came from Futuremark:
Jussi Markkanen: Our game engine is designed specifically for multi-core processors. We can parallelize all CPU-heavy tasks. For example rendering, physics simulation, audio, network handling and game logic are all executed in separate threads. Some other minor tasks are parallelized as well. Currently we have limited multicore scaling up to 16 cores. Here is an image of one of our test runs with heavy CPU load in a 16 core-machine. It demonstrates how the load is nicely distributed across the processors.
http://www.pcgameshardware.com/aid,...xclusive-and-more-technical-details/Practice/
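
What Markkanen describes is basically one worker thread per subsystem. A bare-bones sketch of that layout (purely illustrative; a real engine would use native threads and proper synchronisation, this just shows the shape):

```python
# Bare-bones sketch of the "one thread per subsystem" layout Markkanen
# describes: rendering, physics, audio, networking and game logic each get
# their own worker thread that ticks until the engine shuts down.
import threading
import time

shutdown = threading.Event()

def subsystem(name, tick_hz):
    period = 1.0 / tick_hz
    while not shutdown.is_set():
        # ... do this subsystem's work for one tick ...
        time.sleep(period)      # stand-in for real work

subsystems = [
    ("rendering",  60),
    ("physics",    120),
    ("audio",      50),
    ("network",    30),
    ("game logic", 60),
]

threads = [threading.Thread(target=subsystem, args=(name, hz), name=name)
           for name, hz in subsystems]
for t in threads:
    t.start()

time.sleep(2.0)       # let the "engine" run briefly
shutdown.set()
for t in threads:
    t.join()
print("engine shut down cleanly")
```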
 
I don't see any tests.

I showed you a link with some analysis; your link doesn't have any.

You have to at least see the impact with some benchmarks, rather than just a Q&A.

I'm not saying that the Batman game was a good comparison (we don't know how optimized it really is), but if you're going to market a marquee game with PhysX, you need to be able to show performance differences that one can properly quantify.
 
I showed you a link with some analysis; your link doesn't have any.

You have to at least see the impact with some benchmarks, rather than just a Q&A.

I'm not saying that the Batman game was a good comparison, but if you're going to market a marquee game with PhysX, you need to be able to show performance differences that one can properly quantify.

I know a lot of other games with physics that only use 2-3 cores. It's wrong to assume bad multi-core CPU support just because a game only uses 2-3 cores.
Look at other games which don't use PhysX:
Dirt2: http://www.hardocp.com/article/2009/12/23/dirt_2_gameplay_performance_image_quality/9
Resident Evil 5: http://www.hardocp.com/article/2009/11/03/resident_evil_5_gameplay_performance_iq/9
Wolfenstein 2: http://www.hardocp.com/article/2009/09/01/wolfenstein_gameplay_performance_iq/8

All three games use two CPU cores. What do you think? Do all three game developers want to harm the gamer?
 
We're exclusively talking about PhysX here with multi-core processors... since that is the scope of Huddy's comments. Those benchmarks don't show PhysX on and off.
 