Nvidia GT300 core: Speculation

In how many examples did enabling GPU physics lead to a gain in FPS?
A few showed only a slight negative impact, but in a number of cases the hit was very noticeable.
 
FWIW, I wouldn't take Theo's or Hardware-Infos' info. They've been wrong on too many occasions and, according to CJ, are just taking stabs in the dark, emailing him to ask for confirmation of their guesses. :unsure:

Maybe CJ can enlighten us about what is going on in the background? :runaway:
As you probably know, I am the site administrator of Hardware-Infos. And I can promise that the published GT300 specs, like 512 SPs and a 512-bit bus, are correct. You will see.

To CJ: It is correct that we are in occasional mail contact, but we have not talked about GT300 yet.
 
In how many examples did enabling GPU physics lead to a gain in FPS?
A few showed only a slight negative impact, but in a number of cases the hit was very noticeable.

I wasn't talking about a gain as such... rather that the overhead for GPU physics was perfectly acceptable compared to CPU physics.
The difference, of course, is that this overhead is pretty much constant. So adding more objects/interactions adds little or no extra CPU overhead, only extra GPU load. As long as the GPU is not the bottleneck, you can keep scaling up the physics 'for free'. And that's where your gain is. It was never about extra fps.
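To make that concrete, here's a toy frame-cost model (my own illustration with made-up numbers, not from any benchmark): the frame time is bounded by whichever processor is busier, GPU physics costs the CPU only a roughly constant dispatch overhead, and extra objects add GPU time instead of CPU time.

```cpp
#include <algorithm>

// Toy model of the argument above. All constants are invented for illustration.
float FrameTimeMs(int numObjects, bool gpuPhysics)
{
    float cpuMs = 8.0f;   // fixed per-frame game logic
    float gpuMs = 10.0f;  // fixed rendering load

    if (gpuPhysics) {
        cpuMs += 0.5f;                // roughly constant dispatch overhead
        gpuMs += 0.01f * numObjects;  // extra objects cost GPU time...
    } else {
        cpuMs += 0.05f * numObjects;  // ...instead of scaling CPU time
    }
    // Frame time is bounded by whichever processor is busier; until the GPU
    // becomes the bottleneck, adding more objects is effectively 'free'.
    return std::max(cpuMs, gpuMs);
}
```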
 
It's also worth distinguishing between effects physics, such as shattering glass, and gameplay physics, such as destructible environments. Effects physics are practically "just graphics", like motion blur or depth of field, so it seems reasonable to treat them as a "video" option that the user chooses, just like they choose screen resolution or quality of post-processing.

So in this sense it's kinda silly to run graphics (effects physics) on the CPU at all, though I guess there are some graphics techniques that still have a heavy CPU load (e.g. stencil shadows). I dunno, it just seems like asking for trouble to put any new graphical techniques on the CPU in a PC system.

D3D11's feature allowing multiple CPU threads to construct/queue work for the GPU should bring a nice benefit too, even if the GPU is still forced to switch contexts between graphics and CS - though NVidia may be able to get clever there. In theory this is also coming to D3D10 hardware, so quite a few gamers should see benefits from this.
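For reference, a minimal sketch of how that looks in the D3D11 API: worker threads record into deferred contexts, and the immediate context replays the baked command lists. The function names here are my own placeholders, not from any SDK sample.

```cpp
#include <d3d11.h>

// Worker thread: record commands into a deferred context, then bake them
// into a command list for later playback.
ID3D11CommandList* RecordOnWorker(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... issue state changes / Draw / Dispatch calls on 'deferred' here ...

    ID3D11CommandList* list = nullptr;
    deferred->FinishCommandList(FALSE, &list);
    deferred->Release();
    return list;
}

// Main/render thread: replay recorded lists on the immediate context,
// which stays the single point of submission to the GPU.
void Submit(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    immediate->ExecuteCommandList(list, FALSE);
    list->Release();
}
```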

Jawed
 
Isn't the whole accelerated game physics euphoria just another artificial marketing trick? Just think about what gamers actually care about... I've heard opinions several times like "textures have too low resolution", "shadows are unrealistic", "too few polygons, heads are edgy". But I never noticed any gamer say "there's too little physics" :oops: Nobody missed it, nobody wanted it, but nVidia had it and needed to sell it.

Just like gamers didn't notice that game XYZ had too low a dynamic range until nVidia touted HDR in the NV40 era...
 
And who would actually care? :p
We've seen Mirror's Edge PhysX and the masses don't care...

Well, I do.
Rigid-body physics are only one part of the equation.
You also want fluids, smoke, cloth and all that. And not entirely coincidentally, these are far more computationally expensive, and thus much harder to do on a CPU.
In other words, this video isn't very impressive as far as physics go.
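As a rough illustration of why cloth and particle effects map so well to a GPU, here's a minimal Verlet-style position update in plain C++ (my own sketch, not from any actual engine): every particle's update is independent, so on a GPU the loop simply becomes one thread per particle.

```cpp
#include <vector>

struct Particle { float x, y, z, px, py, pz; }; // current and previous position

// One Verlet integration step: x' = 2x - x_prev + a*dt^2.
// Each iteration is independent of all the others, which is exactly the
// data-parallel shape a GPU wants. (The real cost, constraint solving
// between neighbouring particles, parallelises in a similar way.)
void VerletStep(std::vector<Particle>& cloth, float dt, float gravity)
{
    for (Particle& p : cloth) {
        float nx = 2.0f * p.x - p.px;
        float ny = 2.0f * p.y - p.py + gravity * dt * dt;
        float nz = 2.0f * p.z - p.pz;
        p.px = p.x; p.py = p.y; p.pz = p.z;
        p.x = nx; p.y = ny; p.z = nz;
    }
}
```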
 
Isn't the whole accelerated game physics euphoria just another artificial marketing trick? Just think about what gamers actually care about... I've heard opinions several times like "textures have too low resolution", "shadows are unrealistic", "too few polygons, heads are edgy". But I never noticed any gamer say "there's too little physics" :oops: Nobody missed it, nobody wanted it, but nVidia had it and needed to sell it.

Just like gamers didn't notice that game XYZ had too low a dynamic range until nVidia touted HDR in the NV40 era...

Thank you :D
That's the point in a nutshell.
 
Well, I do.
Rigid-body physics are only one part of the equation.
You also want fluids, smoke, cloth and all that. And not entirely coincidentally, these are far more computationally expensive, and thus much harder to do on a CPU.
In other words, this video isn't very impressive as far as physics go.

Hmm, console users (who outnumber PC gamers) don't seem to mind, and those companies have made a fortune so far. :p

Perhaps developing those games and making a profit is about more than sentimental value?
 
Just like gamers didn't notice that game XYZ had too low a dynamic range until nVidia touted HDR in the NV40 era...
I see nothing wrong with that. If gamers didn't know HDR existed, would they demand it? So it's fair that they know there's something called PhysX and what it does; now let them decide whether they want it or not.
 
Hmm, console users (who outnumber PC gamers) don't seem to mind, and those companies have made a fortune so far. :p

Perhaps developing those games and making a profit is about more than sentimental value?

Yea, and at some point we were happy with 8-bit machines with 16-colour 2D graphics.
 
Isn't the whole accelerated game physics euphoria just another artificial marketing trick? Just think about what gamers actually care about... I've heard opinions several times like "textures have too low resolution", "shadows are unrealistic", "too few polygons, heads are edgy". But I never noticed any gamer say "there's too little physics" :oops: Nobody missed it, nobody wanted it, but nVidia had it and needed to sell it.

Just like gamers didn't notice that game XYZ had too low a dynamic range until nVidia touted HDR in the NV40 era...
Ehh... gamers are already saying there is too little physics. It's as big as lighting now. We want better AA and lighting (for free...) and the ability to demolish any and all objects with "real world" weight and mass... I guess you're just not around many gamers... ever been to a LAN party? There is always talk about physics...
 
Yea, and at some point we were happy with 8-bit machines with 16-colour 2D graphics.

Yeah, some of the last few posts in this thread boggle my mind. That's some seriously backward thinking. People don't complain about weak physics in games because they don't know any better. Just like people didn't complain about the lack of A/C in the Model T.

One of my biggest pet peeves is painted-on doors and windows. Nobody complains about the widespread use of textures for those, but I think they're a glaring example of how far we have to go. I'm looking forward to the day when everything is geometry-based. Give me some doorknobs, damnit.
 
Yea, and at some point we were happy with 8-bit machines with 16-colour 2D graphics.

For its time that was the best we had. Now we are seeing diminishing returns in how people perceive physics. And just because they don't care doesn't make them ignorant. They can simply grasp the situation for what it is :D
 
Ehh... gamers are already saying there is too little physics.
If gamers talk about physics, they mostly criticise the unrealistic behaviour of the effects, not an insufficient quantity of effects. PhysX only brings more of these (still unrealistic) effects, so it isn't a solution to this problem. It seems that inadequate computing power isn't the cause of the poor level of realism, because despite all these TFLOPS, nothing has changed in this respect.

Secondly, I was talking about the pre-nV-PhysX situation.
 
Hmm, Charlie seems convinced that NVidia is building something very similar to Larrabee, with little fixed-function hardware. Does that seem likely?

Could it be that NVidia is simply adding D3D11 features by running them on the shaders, e.g. the tessellator?

If that's so, then that doesn't necessarily mean NVidia's ditching most fixed-function units, such as ROPs.

One of the other bullet points in recent forward-looking presentations by Luebke is the idea that the best D3Dx GPU is the one that is optimised to run D3Dx-1 fastest - on the basis that no-one will really notice D3Dx performance, so why bother, just make it conformant.

Could that strategy be playing out in this shader-centric D3D11-features model? Most of the new stuff runs solely on the ALUs. If it performs like shit, who cares? Die space will have been maximised for the D3D10 features, and the increase in compute performance is what matters most for CUDA's sake.

Jawed
 
One of the other bullet points in recent forward-looking presentations by Luebke is the idea that the best D3Dx GPU is the one that is optimised to run D3Dx-1 fastest - on the basis that no-one will really notice D3Dx performance, so why bother, just make it conformant.

If Nvidia cripples tessellator performance, I wish them a quick and painful death. All the fancy shader effects and high-resolution textures in the world won't make up for woefully lacking geometry complexity. And what's a DX11 tech demo without extensive tessellation?

ECH, actually the underlying problem is that they are ignorant, as we all are until we see something for the first time. And then we raise our expectations. It's called progress.
 
One of the other bullet points in recent forward-looking presentations by Luebke is the idea that the best D3Dx GPU is the one that is optimised to run D3Dx-1 fastest - on the basis that no-one will really notice D3Dx performance, so why bother, just make it conformant.

Yea, with the exception of the GeForce FX :)
But I don't think D3D11 will be like that. It's too similar to D3D10 + CUDA for that.
 