http://www.bit-tech.net/bits/interviews/2010/01/06/interview-amd-on-game-development-and-dx11/5
Some good questions there, interesting read.
The bottom line is we have enough; we don't let triple-A or double-A titles go through the net and in fact it's extremely rare for a Saboteur situation to come up. It's very rare for titles as a whole to go past us without seeing our QA and testing labs along the way. Our team is big enough and actually exceedingly good at some of the things they do. I believe it is smaller than the Nvidia team that's directly related to developer relations, but I have some of the best engineers in the world doing some really great stuff in terms of R&D with graphics optimisations: lighting and shadowing.
bugs which users of Nvidia cards don't have.
Sorry? Have you ever read even one "new NV driver" release thread? The land seems to be too barren for the grass to be green on either side of the fence.
Unless, of course, you think that this is something that's not described as an issue.
bit-tech: IF - given we don't know yet - when Nvidia's Fermi cards finally arrive they are faster than your HD 5970-
RH: -Well if it's not faster and it's 50 per cent bigger, then they've done something wrong-
I'm not saying Nvidia drivers are 100% bug-free, but for newly released games I see far more ATI users having issues than Nvidia users. Are you saying it's the opposite?
I'd be inclined to believe Huddy is thinking the question is about the 5870 when he says this, but I've read the same sentiment before on this forum. Why would a Fermi card need to be faster than a dual GPU Cypress card?
The 5970 also doesn't make sense to me; it could be an honest typo (at least they didn't write "Firmy").
[Nvidia] put PhysX in there, and that's the one I've got a reasonable amount of respect for. Even though I don't think PhysX - a proprietary standard - is the right way to go, Nvidia touts it as an "open standard" and says it would be "more than happy to license it to AMD", but [Nvidia] won't. It's just not true! You know the way it is; it's simply something [Nvidia] would not do, and they can publicly say it as often as they like knowing it won't happen, because we've actually had quiet conversations with them and they've made it abundantly clear that we can go whistle.
The other thing is that all these CPU cores we have are underutilised and I'm going to take another pop at Nvidia here. When they bought Ageia, they had a fairly respectable multicore implementation of PhysX. If you look at it now it basically runs predominantly on one, or at most, two cores. That's pretty shabby! I wonder why Nvidia has done that? I wonder why Nvidia has failed to do all their QA on stuff they don't care about - making it run efficiently on CPU cores - because the company doesn't care about the consumer experience; it just cares about selling you more graphics cards by coding it so the GPU appears faster than the CPU.
It's the same thing as Intel's old compiler tricks that it used to do; Nvidia simply takes out all the multicore optimisations in PhysX. In fact, if coded well, the CPU can tackle most of the physics situations presented to it. The emphasis we're seeing on GPU physics is an over-emphasis that comes from one company having GPU physics... promoting PhysX as if it's God's answer to all physics problems, when actually it's more a solution in search of problems.
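Huddy's point here is that a rigid-body update is data-parallel and scales across CPU cores if the engine is written to use them. A minimal sketch of that idea in C++ (illustrative only, not PhysX code; the Body struct, the gravity-only integration and the plain std::thread fan-out are all assumptions):

```cpp
// Minimal sketch (not PhysX code): fanning a rigid-body integration step
// out across every available CPU core with std::thread.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
};

// Integrate one contiguous slice of the body array; slices are disjoint,
// so the workers share no mutable state and need no locking.
static void integrateSlice(std::vector<Body>& bodies,
                           std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        Body& b = bodies[i];
        b.vy -= 9.81f * dt;   // gravity
        b.px += b.vx * dt;
        b.py += b.vy * dt;
        b.pz += b.vz * dt;
    }
}

// Split the step evenly over however many hardware threads exist.
void integrateStep(std::vector<Body>& bodies, float dt) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (bodies.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrateSlice, std::ref(bodies), begin, end, dt);
    }
    for (std::thread& w : workers) w.join();
}
```

A real engine would keep a persistent worker/job pool instead of spawning threads every step, but the division of work is the same, which is why "runs on at most two cores" reads as a choice rather than a limitation.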
Two things for Fuddy:
Nvidia does not sell CPUs. Why do you expect them to do your job of helping the ISVs?
A lot of new games are using or will use CPU PhysX. Somebody should tell those ISVs that they will get very bad multi-core CPU support. And how would it help Nvidia sell more GPUs when there is no GPU-PhysX support?
The HardOCP article said:
With graphics and PhysX running on the same GTX 285 GPU, our CPU usage changed hardly at all. Core 0 was the most heavily loaded, at an average of about 51% and a peak utilization of 97%. Across all four cores, we saw an average utilization of 35%, and a peak of 71%. That is only 4% lower on average than with PhysX running entirely on the CPU.
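For context on how per-core figures like those are gathered, here is a minimal, hedged sketch of sampling per-core utilization over a one-second interval. It is Linux-only (it parses /proc/stat), whereas HardOCP would have used Windows tooling, so treat it purely as an illustration of the busy/total arithmetic:

```cpp
// Minimal sketch: sample each core's busy/total jiffies from /proc/stat
// twice, one second apart, and report busy time as a percentage.
// Linux-only; the field layout assumes a modern /proc/stat.
#include <cctype>
#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

struct CoreTimes { unsigned long long busy = 0, total = 0; };

// Parse the per-core "cpuN" lines of /proc/stat.
std::vector<CoreTimes> readCores() {
    std::vector<CoreTimes> cores;
    std::ifstream stat("/proc/stat");
    std::string line;
    while (std::getline(stat, line)) {
        // Skip the aggregate "cpu" line; keep "cpu0", "cpu1", ...
        if (line.rfind("cpu", 0) != 0 || line.size() < 4 ||
            !std::isdigit(static_cast<unsigned char>(line[3])))
            continue;
        std::istringstream in(line);
        std::string label;
        unsigned long long user, nice, sys, idle, iowait, irq, softirq;
        in >> label >> user >> nice >> sys >> idle >> iowait >> irq >> softirq;
        CoreTimes c;
        c.busy  = user + nice + sys + irq + softirq;
        c.total = c.busy + idle + iowait;
        cores.push_back(c);
    }
    return cores;
}

int main() {
    auto a = readCores();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    auto b = readCores();
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        double busy  = double(b[i].busy  - a[i].busy);
        double total = double(b[i].total - a[i].total);
        std::cout << "core " << i << ": "
                  << (total > 0 ? 100.0 * busy / total : 0.0) << "%\n";
    }
}
```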
Mr. Sontin, how does Nvidia run PhysX on an iPhone, which also doesn't have a GPU?
doh!
I think he was talking about PhysX underutilizing multi-core CPUs. HardOCP ran those tests, and they showed that the CPU was not being used efficiently at all, so what Huddy is saying is correct. The problem is that a supposedly CPU-driven feature such as physics can be done on the GPU, yet no effort is made to give PhysX any real CPU assistance.
No, he is speaking about the whole CPU support of PhysX. And I don't see the problem with Nvidia not optimizing the PhysX code for CPUs. It's not their problem that the CPU vendors don't care about physics in games.
Since when did CPU vendors control physics? They don't. Developers do, and they can do it on a CPU if they so desire. PhysX is a vehicle to make this "easier" by offering acceleration; it's not supposed to be "the only way" to make physics accessible to all.
If you consider PhysX as a form of T&L, which was the whole "give the CPU less work to do" deal in the DX7 era, then it has done a very poor job of it. It's not a great analogy, but given what PhysX is supposed to offer, making no effort to let the CPU run it efficiently is not acceptable if helping to accelerate physics is what PhysX was claiming in the first place.
So, what is the point of the PhysX bashing from Fuddy? There are developers who used or are using PhysX as their main CPU physics engine. It's not Nvidia's fault that they don't use all the free cores of a CPU.
He's saying that before Nvidia acquired Ageia, PhysX had a fairly respectable multi-core implementation. Since tests show that no more than two cores are being used now, he's saying the tech is being "held back" intentionally to make you buy a video card rather than rely on the CPU.
http://www.pcgameshardware.com/aid,...xclusive-and-more-technical-details/Practice/
Jussi Markkanen: Our game engine is designed specifically for multi-core processors. We can parallelize all CPU-heavy tasks. For example, rendering, physics simulation, audio, network handling and game logic are all executed in separate threads. Some other minor tasks are parallelized as well. Currently we have limited multi-core scaling up to 16 cores. Here is an image of one of our test runs with heavy CPU load on a 16-core machine. It demonstrates how the load is nicely distributed across the processors.
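As a rough illustration of the subsystem-per-thread layout Markkanen describes, here is a minimal C++ sketch; the per-subsystem functions and the atomic shutdown flag are hypothetical placeholders, not anything from their engine:

```cpp
// Minimal sketch of a subsystem-per-thread engine layout: rendering,
// physics, audio, networking and game logic each get a dedicated thread
// that loops until shutdown. Every function below is a hypothetical
// placeholder, not an API from the engine being discussed.
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

std::atomic<bool> g_running{true};

// Hypothetical per-iteration work for each subsystem.
void renderFrame()   { /* draw the current frame */ }
void stepPhysics()   { /* advance the simulation */ }
void mixAudio()      { /* fill the next audio buffer */ }
void pumpNetwork()   { /* send/receive packets */ }
void tickGameLogic() { /* run AI and gameplay rules */ }

int main() {
    const std::vector<std::function<void()>> subsystems = {
        renderFrame, stepPhysics, mixAudio, pumpNetwork, tickGameLogic
    };

    // One dedicated thread per subsystem.
    std::vector<std::thread> threads;
    for (const auto& task : subsystems) {
        threads.emplace_back([task] {
            while (g_running.load(std::memory_order_relaxed))
                task();
        });
    }

    // A real engine would pace frames and synchronize subsystems here;
    // this sketch just shuts straight down.
    g_running = false;
    for (auto& t : threads) t.join();
    return 0;
}
```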
I don't see any tests there. He is just claiming something.
I showed you a link with some analysis. Your link doesn't have any.
You have to at least see the impact with some benchmarks rather than just a Q&A.
I'm not saying that the Batman game was any good as a comparison, but if you're going to market a marquee game on PhysX, you need to be able to show performance differences that one can properly quantify.