How will NVidia counter the release of HD5xxx?

What will NVidia do to counter the release of the HD5xxx series?

  • GT300 Performance Preview Articles
    Votes: 29 (19.7%)
  • New card based on the previous architecture
    Votes: 18 (12.2%)
  • New and Faster Drivers
    Votes: 6 (4.1%)
  • Something PhysX related
    Votes: 11 (7.5%)
  • Powerpoint slides
    Votes: 61 (41.5%)
  • They'll just sit back and watch
    Votes: 12 (8.2%)
  • Other (please specify)
    Votes: 10 (6.8%)

  Total voters: 147
So it is just the right time to come out :)
Anyway, at that time OpenCL did not exist.

OpenCL is still vapourware, as is Havok with OpenCL support.
My point is: just because a company demonstrates a technology, doesn't mean it's on the verge of release. Some of the stuff just never amounts to anything.
In fact, nVidia also had some smoke particle sample at the introduction of the 8800 series.
Back then they hadn't finished Cuda yet, and I doubt that they had ever even thought about acquiring PhysX.
So, although nVidia actually did deliver on their promise of physics acceleration a few years later, the actual technology of the original demo had nothing to do with the final product.

They also need to stop Nvidia gaining ground with their middleware.

Is nVidia gaining that much ground then? Seems like pretty much all PhysX titles are based on the UT3 engine. They wouldn't have used Havok anyway.
When Intel has some actual hardware on the market, it's soon enough to start promoting Havok with OpenCL. Havok still has a very strong reputation anyway, since it's been around for so long, and has delivered many successful titles.
 
OpenCL is still vapourware, as is Havok with OpenCL support.
My point is: just because a company demonstrates a technology, doesn't mean it's on the verge of release. Some of the stuff just never amounts to anything.
In fact, nVidia also had some smoke particle sample at the introduction of the 8800 series.
Back then they hadn't finished Cuda yet, and I doubt that they had ever even thought about acquiring PhysX.
So, although nVidia actually did deliver on their promise of physics acceleration a few years later, the actual technology of the original demo had nothing to do with the final product.

AFAIK Nvidia already has an OpenCL-compatible driver, and ATI will follow (it's in their interest to do so). As for the other topics, we will wait and see, do you agree?

Is nVidia gaining that much ground then? Seems like pretty much all PhysX titles are based on the UT3 engine. They wouldn't have used Havok anyway.
When Intel has some actual hardware on the market, it's soon enough to start promoting Havok with OpenCL. Havok still has a very strong reputation anyway, since it's been around for so long, and has delivered many successful titles.

PhysX is gaining ground, yes, also because nVidia is pushing it so much. And GPU-accelerated physics is a great marketing argument, on paper.
 
Nothing to do with faith in it improving, everything to do with faith that nVidia is going to keep it proprietary and it won't be widely adopted. :yep2:

Uh? I'm not "giving PhysX no chance". I already said that if they port it to OpenCL, all customers will gain. But I see difficulties in the present implementation, and in the future if it stays a product tied to one vendor only.

Ok, I can't keep up with you guys any more. One minute you're talking about the quality of PhysX effects (to which I responded). Now it seems you're back on the proprietary thing. :)

Fluttering paper as a graphics effect is just as good looking in a game when it's driven artistically instead of with physics, and without a massive frame rate drop as your graphics card loses power to physics calculations. Sometimes you even get problems such as rag-doll looking utterly unnatural.

Artistically emulated fluttering paper? :D You can't artistically emulate anything that responds to a dynamic environment.

Whether it's Havok or PhysX is pretty immaterial if it doesn't bring anything meaningful, and is just used (for instance) as a marketing point to sell one IHV's hardware instead of adding anything of worth to a game. You don't get a pat on the back from the customer by saying "gee, but we're just as pointless as all the other middleware."

Sure, but why are you imposing your subjective definition of "meaningful" and "pointless" on everyone? Honestly, I have to laugh at this "physics must be meaningful" stance. Out of curiosity, how much of what a GPU does today is meaningful to you and why?
 
AFAIK Nvidia already has an OpenCL-compatible driver, and ATI will follow (it's in their interest to do so). As for the other topics, we will wait and see, do you agree?

I know nVidia has an OpenCL driver; I'm in the registered GPU developer program.
I also know that their publicly released drivers won't run OpenCL code. Hence Havok won't run... vapourware.
And nVidia has no reason to enable OpenCL until AMD has OpenCL as well.
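
To be concrete about what "won't run OpenCL code" means in practice, here's a minimal sketch of my own (not anything from nVidia's SDK): it just asks the installed driver whether it exposes any OpenCL platform and GPU devices. A driver that doesn't ship an OpenCL runtime simply reports none.

```cpp
// Minimal illustration: check whether the installed drivers expose an OpenCL
// runtime at all. Build against the Khronos CL headers and the OpenCL library.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint numPlatforms = 0;
    // If the public driver ships no OpenCL runtime, this reports 0 platforms
    // (or fails outright) -- the "vapourware" situation described above.
    if (clGetPlatformIDs(0, nullptr, &numPlatforms) != CL_SUCCESS || numPlatforms == 0) {
        printf("No OpenCL platforms exposed by the installed drivers.\n");
        return 1;
    }

    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char name[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);

        cl_uint numDevices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices);
        printf("Platform: %s, GPU devices: %u\n", name, numDevices);
    }
    return 0;
}
```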

PhysX is gaining ground, yes, also because nVidia is pushing it so much. And GPU-accelerated physics is a great marketing argument, on paper.

It may be great for GPU sales (which Intel doesn't care about yet, because they haven't entered the market yet), but from the development side it's not a big deal. As I already pointed out, it's all just a single engine, which had been using PhysX even before nVidia came into the picture. So these games weren't going to use Havok anyway. No ground gained.
 
OpenCL is still vapourware

I stopped reading right here. Guess you never heard of Snow Leopard?

Imo as long as all the physics out there are effects physics, GPU physics will not take off... especially if it's proprietary stuff like PhysX, which only works on hardware from one IHV. Give us gameplay physics already.
 
I stopped reading right here. Guess you never heard of Snow Leopard?

Oh yea, that fantastic gaming platform, with DX11 and PhysX and all that... (which we were talking about, guess you didn't read that part of the discussion either).
Besides, from what I read, the OpenCL support on Snow Leopard is still VERY shoddy, to say the least, especially from AMD's side.
 
Ok, I can't keep up with you guys any more. One minute you're talking about the quality of PhysX effects (to which I responded). Now it seems you're back on the proprietary thing. :)

It's your problem: I made my statement pretty clear. PhysX has some problems in the current games using GPU acceleration: on the performance side, and in the fact that the quality improvements are not groundbreaking graphically and do not improve gameplay much (this is subjective, of course). This is for the present.
For the future, it could have problems because it is tied to one hardware vendor, and if the competition succeeds in delivering OpenCL GPU-accelerated middleware, I expect that either PhysX will be ported to OpenCL too, or it is likely to be abandoned.


I know nVidia has an OpenCL driver; I'm in the registered GPU developer program.
I also know that their publicly released drivers won't run OpenCL code. Hence Havok won't run... vapourware.
And nVidia has no reason to enable OpenCL until AMD has OpenCL as well.

And your opinion is that they will have it soon or not? My opinion is that they will have it soon.

It may be great for GPU sales (which Intel doesn't care about yet, because they haven't entered the market yet), but from the development side it's not a big deal. As I already pointed out, it's all just a single engine, which had been using PhysX even before nVidia came into the picture. So these games weren't going to use Havok anyway. No ground gained.

When Physx was bought by Nv, what was their market share?
 
And your opinion is that they will have it soon or not? My opinion is that they will have it soon.

I don't think that OpenCL has much to do with it.
Intel probably won't release a GPU-accelerated version of Havok until they have a GPU on the market. So when AMD releases OpenCL support is irrelevant.

When Physx was bought by Nv, what was their market share?

I don't know, but as I say, UE3.0 already used PhysX, and UE3.0 is pretty much the only engine used in all these PhysX titles that nVidia is promoting.
 
This is for the present. For the future....

Yeah we can break up the discussion into timelines of present and future but the argument doesn't change much. All the guesswork about what will happen in the future is of zero practical value as we don't know how PhysX will evolve either.

neliz is right. Nvidia will leverage PhysX as a proprietary advantage over the competition as long as they possibly can and as long as they can claim a competitive advantage from doing so. As of now they are the first mover and therefore should enjoy the fruits of that status. Now will they be complacent and let Havok come back and blow them away like so many people predict? Well that remains to be seen.
 
What about Force Unleashed? I haven't played it, but I hear it uses physics quite a bit for gameplay and it's finally coming out for PC soon. Do we know what physics API it used? (Pixelux, wasn't it?)
 
What about Force Unleashed? I haven't played it, but I hear it uses physics quite a bit for gameplay and it's finally coming out for PC soon. Do we know what physics API it used? (Pixelux, wasn't it?)

Are you talking about DMM (Digital Molecular Matter)? I believe this is all done on the CPU. But what if they update it to use DX11 compute shaders?
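
Purely as a hypothetical sketch of what "use DX11 compute shaders" could look like (none of this is from DMM or the game; the vertex layout and the toy relaxation step are invented for illustration), a minimal D3D11 compute pass that updates a structured buffer in place:

```cpp
// Hypothetical sketch only: a trivial DX11 compute pass that nudges a buffer of
// vertices toward their rest positions, standing in for the idea of moving a
// deformation update onto the GPU. (Error handling omitted for brevity.)
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstring>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "d3dcompiler.lib")

static const char* kShader = R"(
struct Vertex { float3 pos; float3 rest; };
RWStructuredBuffer<Vertex> verts : register(u0);

[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    // Pull each vertex a little toward its rest position (placeholder "physics").
    Vertex v = verts[id.x];
    v.pos += 0.1f * (v.rest - v.pos);
    verts[id.x] = v;
}
)";

int main() {
    ID3D11Device* dev = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    D3D_FEATURE_LEVEL fl = D3D_FEATURE_LEVEL_11_0;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      &fl, 1, D3D11_SDK_VERSION, &dev, nullptr, &ctx);

    ID3DBlob* cs = nullptr;
    D3DCompile(kShader, strlen(kShader), nullptr, nullptr, nullptr,
               "main", "cs_5_0", 0, 0, &cs, nullptr);
    ID3D11ComputeShader* shader = nullptr;
    dev->CreateComputeShader(cs->GetBufferPointer(), cs->GetBufferSize(), nullptr, &shader);

    // Structured buffer of vertices, bound as a UAV so the shader can update it in place.
    const UINT count = 64 * 1024, stride = sizeof(float) * 6;
    D3D11_BUFFER_DESC bd = { count * stride, D3D11_USAGE_DEFAULT,
                             D3D11_BIND_UNORDERED_ACCESS, 0,
                             D3D11_RESOURCE_MISC_BUFFER_STRUCTURED, stride };
    ID3D11Buffer* buf = nullptr;
    dev->CreateBuffer(&bd, nullptr, &buf);

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavd = {};
    uavd.Format = DXGI_FORMAT_UNKNOWN;
    uavd.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavd.Buffer.NumElements = count;
    ID3D11UnorderedAccessView* uav = nullptr;
    dev->CreateUnorderedAccessView(buf, &uavd, &uav);

    // One thread per vertex, 64 threads per group.
    ctx->CSSetShader(shader, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
    ctx->Dispatch(count / 64, 1, 1);
    return 0;
}
```

In a real port you'd presumably keep the solver state resident on the GPU and feed the renderer directly, rather than reading it back every frame; the toy kernel above is just to show the dispatch mechanics.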
 
Yeah, the DMM stuff. I thought that was pretty f-ing revolutionary when I first heard about it, then I abandoned hope when they kept un-announcing the title for PC.

Does it matter if it's DX11 or not or GPU/CPU? Hell, I'll take it through CPU if it works/plays alright. I'm not picky. :)
 
Yeah, the DMM stuff. I thought that was pretty f-ing revolutionary when I first heard about it, then I abandoned hope when they kept un-announcing the title for PC.

Does it matter if it's DX11 or not or GPU/CPU? Hell, I'll take it through CPU if it works/plays alright. I'm not picky. :)

Neither am I ;). However, I do believe they need to actually start promoting this. It should be allowed on the PC side by now (if not soon). Combine this with NaturalMotion's Euphoria (which I believe is about AI behavior) and one should have an AAA title on their hands (pending storyline, gameplay, etc.).

I find it odd that this isn't being marketed for the PC segment yet (pending when this is actually allowed). I too thought this was pretty revolutionary back then and still do today. Imagine Crytek using either DMM or Euphoria in Crysis 2. Imagine this old demo using DX11 tessellation, for example :oops:
 
Just took a quick look, but strangely they don't market it as a complete physics engine ... and Force Unleashed also uses Havok. I guess using the DMM method for everything is too expensive. Still, it's "just" flexible/breakable joints ... I don't see why they can't do the simulation with Havok; maybe the artist tools for DMM are a lot better?

Although it's a cool touch that they support non-elastic deformation too ... you don't see that too often.
 
You already answered it yourself, partly...
The first generation of DX11 games won't make full use of its features. So for the most part, you'll be running DX9/10-class code, but faster.
AMD doesn't have a physics solution, so that ability of the architecture isn't going to be leveraged either.

nVidia may not yet have a DX11 part, but they do have DX10 parts, with a physics solution, and are actively promoting this to developers.

So the net result is ~DX10-class graphics on AMD, vs DX10-class graphics with physics on nVidia.
AMD will either get poor framerates with CPU physics, or will get less detailed graphics.

If I understand you correctly, you are saying that last-generation hardware is actually a better buy, and a better value? All because of PhysX? What about your purchase 1-2 years from now? DX11 has at least some future-proofing built in. Buying a DX10-class part this year vs. a DX11-class card, it sure seems like the smart money is on a DX11 card. Then again, you seem to be saying that PhysX trumps this?
 
If I understand you correctly, you are saying that last-generation hardware is actually a better buy, and a better value? All because of PhysX? What about your purchase 1-2 years from now? DX11 has at least some future-proofing built in. Buying a DX10-class part this year vs. a DX11-class card, it sure seems like the smart money is on a DX11 card. Then again, you seem to be saying that PhysX trumps this?

Well, I tend not to look further than 2 years ahead, because there will be a whole new generation of far more powerful hardware around by then.
So on a 2-year refresh cycle... well if in those first 2 years, the DX11-content doesn't exceed the DX10-content much (and if the move from DX9 to DX10 is anything to go by...), then yes, PhysX could actually be the better value (better performance, better graphics, all the main selling points for DX11?).
At least in the first year of this DX11 hardware, I don't expect to see much in terms of physics support, and probably not too much DX11 usage either.
So yes, what I'm saying is: is this DX11 future proofing going to be relevant for the lifetime of this particular card?
I think in the first 2 years of DX10 hardware, nothing much happened. I bet a lot of people upgraded their early 8800s without ever running DX10 at all, let alone using Cuda or PhysX software, or the DirectCompute and OpenCL that it is capable of, but isn't yet released to the public.
 
Well, I tend not to look further than 2 years ahead, because there will be a whole new generation of far more powerful hardware around by then.
So on a 2-year refresh cycle... well if in those first 2 years, the DX11-content doesn't exceed the DX10-content much (and if the move from DX9 to DX10 is anything to go by...), then yes, PhysX could actually be the better value (better performance, better graphics, all the main selling points for DX11?).

But do any games with PhysX support actually have improved performance when it is enabled? Admittedly, I've not really looked into this much, but from what I've heard I thought the opposite was true?
 
But do any games with PhysX support actually have improved performance when it is enabled? Admittedly, I've not really looked into this much, but from what I've heard I thought the opposite was true?

No, I mean that if you have a non-PhysX card, you will suffer great performance degradation if you want to keep the same level of visual detail.
So in that sense, PhysX-capable cards have the better performance (or the better visuals, depends on which way you go).
That's my point, missing out on PhysX might be a bigger deal than missing out on DX11, in terms of visual effects and performance, at least today, and in the near future.

(As I already said, who cares about 'faster'? It should be about creating more detailed, more dynamic environments).
 