How will NVidia counter the release of HD5xxx?

What will NVidia do to counter the release of HD5xxx-series?

  • GT300 Performance Preview Articles

    Votes: 29 19.7%
  • New card based on the previous architecture

    Votes: 18 12.2%
  • New and Faster Drivers

    Votes: 6 4.1%
  • Something PhysX related

    Votes: 11 7.5%
  • Powerpoint slides

    Votes: 61 41.5%
  • They'll just sit back and watch

    Votes: 12 8.2%
  • Other (please specify)

    Votes: 10 6.8%

  • Total voters
    147
Status
Not open for further replies.
When you plug AA, it does as expected, and reduces jaggies all over the screen. It improves the overall fidelity of the rendered image.

Physics was plugged as realistic moving people and bodies, buildings exploding into their component parts, cars that handle like the real thing, ie, stuff generally having a realistic looking weight and momentum inside the virtual environment of the game. Games were supposed to get better by becoming more realistic, both in the gameplay and the visual areas.

Instead we've had spurious fluttering flags and scraps of paper being kicked up, usually with framerate drops, and restricted only to Nvidia GPUs.

Stuff has been tacked onto games to give some meaning to PhysX without actually being made part of the game. It seems to be "more cowbell" than anything else. We were expecting a lot more, and we weren't expecting Nvidia to start paying devs to exclude similar effects from the CPU, a general principle that I consider bad for the PC gaming landscape in the long run.
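The "realistic looking weight and momentum" described above boils down to very little code. As a toy, vendor-neutral sketch (not taken from PhysX, Havok, or any real engine), the semi-implicit Euler step most game physics uses looks roughly like this:

```python
def step(pos, vel, mass, force, dt):
    """Advance one body by one timestep (semi-implicit Euler)."""
    acc = force / mass       # heavier objects accelerate less: "weight"
    vel = vel + acc * dt     # velocity persists between frames: "momentum"
    pos = pos + vel * dt     # position follows the updated velocity
    return pos, vel

# A 2 kg crate pushed with 10 N over one 60 Hz frame:
pos, vel = step(0.0, 0.0, 2.0, 10.0, 1.0 / 60.0)
```

The point of the promise was exactly this kind of simulation applied to everything in the scene, not to a handful of decorative props.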
 
When you plug AA, it does as expected, and reduces jaggies all over the screen. It improves the overall fidelity of the rendered image.

Physics was plugged as realistic moving people and bodies, buildings exploding into their component parts, cars that handle like the real thing, ie, stuff generally having a realistic looking weight and momentum inside the virtual environment of the game. Games were supposed to get better by becoming more realistic, both in the gameplay and the visual areas.

Instead we've had spurious fluttering flags and scraps of paper being kicked up, usually with framerate drops, and restricted only to Nvidia GPUs.

Stuff has been tacked onto games to give some meaning to PhysX without actually being made part of the game. It seems to be "more cowbell" than anything else. We were expecting a lot more, and we weren't expecting Nvidia to start paying devs to exclude similar effects from the CPU, a general principle that I consider bad for the PC gaming landscape in the long run.

I couldn't agree more. It's not a matter of bias or double standards for me, because as I wrote upstream, no physics middleware has impressed me yet. I'm all for minor visual enhancements, if they don't tank the frame rate or cause game instability (a la Sacred 2), but as BZB wrote, that's not the role I imagined PhysX or Havok taking. Do I want to see physics support flourish and become more integrated into actual gameplay? Sure. But I don't want to see it or any other piece of software fragment the PC gaming market. I don't want to see devs widely supporting something that requires an Intel Core proc, despite the fact that I have an i7 965. I don't want to see PhysX in its current state become the de facto physics middleware despite the fact that I own a GTX 285. All of which is why compute shader is one of the most exciting new features of DX11, since it should homogenize the situation.
 
When you plug AA, it does as expected, and reduces jaggies all over the screen. It improves the overall fidelity of the rendered image.

It does now... back when AA was first introduced, it had huge performance hits, and indeed caused instability or rendering bugs because of the extra memory requirements and not all games being able to handle these 'oddly sized' rendertargets.

Even though AA these days mostly runs without problems (although games like Crysis still struggled with it), and the performance hit isn't as big as it once was... it still does nothing for gameplay.
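Those extra memory requirements are easy to ballpark. A rough sketch (assuming a plain 32-bit colour target and ignoring depth/stencil; the numbers are illustrative, not measured):

```python
def rt_bytes(width, height, bytes_per_pixel=4, samples=1):
    """Approximate render-target size; each AA sample stores a full pixel."""
    return width * height * bytes_per_pixel * samples

base = rt_bytes(1600, 1200)               # ~7.3 MB for a plain 32-bit target
ssaa4 = rt_bytes(1600, 1200, samples=4)   # ~29.3 MB with 4x supersampling
```

Quadrupling the framebuffer footprint on the cards of that era is exactly the kind of cost that caused the early instability.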

As for physics in general... well, Half Life 2 may run okay on today's CPUs... but back when I first played the game, I had an Athlon XP1800+, which jumped into the single-digit framerates when I blew up a bunch of crates or such.
Again with Crysis, the included CPU test has such heavy physics loads that my 3 GHz Core2 Duo slows to a crawl.
So why isn't PhysX allowed to have framerate dips? They're inherent to physics. It's inevitable that these games will run much better on future GPUs, just as Half Life 2 runs great on today's CPUs but certainly didn't at its release.
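Those dips have a structural cause. As a hedged sketch (a naive broadphase, not how HL2 or Crysis actually schedule their physics), collision detection that tests every pair of bodies grows quadratically, so blowing crates into fragments multiplies the work:

```python
from itertools import combinations

def naive_pair_count(n_bodies):
    """Number of pairwise collision tests in a naive O(n^2) broadphase."""
    return sum(1 for _ in combinations(range(n_bodies), 2))

few = naive_pair_count(10)     # 10 intact crates -> 45 pair tests
many = naive_pair_count(100)   # shattered into 10 pieces each -> 4950 tests
```

Ten times the bodies means roughly a hundred times the pair tests, which is why debris-heavy scenes punish any processor, CPU or GPU.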
 
I'm allowing for GPU Physics to mature and am excited about the prospect of what GPU Physics may do. It's nice to see an IHV evangelize and push GPU Physics, considering how political it is.

Sure, it would be nice to see performance improve.

Sure, it would be nice to see clarity on open standards.

Sure, it would be nice to see more compelling content in more titles that improves eye-candy, interaction and gameplay.

Certainly not going to ignore PhysX because it isn't "ideal" for all. It's a feature that may enhance the gaming experience, improve realism and immersion.

At least GPU Physics is in the discussion now and has the ability to mature; and without nVidia where would it be?
 
AA/AF currently add more to the visuals of games than PhysX has. That may change in the future, mind you.. Who knows what can/will be done with it.

That's utterly subjective...and you're right that PhysX is currently immature and will only improve from here.

Physics was plugged as realistic moving people and bodies, buildings exploding into their component parts, cars that handle like the real thing, ie, stuff generally having a realistic looking weight and momentum inside the virtual environment of the game. Games were supposed to get better by becoming more realistic, both in the gameplay and the visual areas.

Oh I see what you've done there. So PhysX must provide physics nirvana else it's a failure. Hmmmm curious that other middleware isn't held to the same lofty standard.

And John, of course there's a double standard. Not saying you specifically but many of those criticizing PhysX seem to think Havok is the bees knees. At least you're an indiscriminate critic :) And well, the standards thing has been rehashed to death.....proprietary GPU accelerated physics is better than nothing. But at least "nothing" is freely available to everyone.
 
Funny Volition can manage so much out of a "crippled" CPU-based physics engine while PhysX games seem to have the... err.. same... library... effects.

Only Cryostasis and the UT3 map pack show at least some developer effort.
 
Oh I see what you've done there. So PhysX must provide physics nirvana else it's a failure. Hmmmm curious that other middleware isn't held to the same lofty standard.

I don't think this is the point. The point is that today GPU-accelerated physics is still immature and doesn't yet change the game experience much compared to CPU physics; moreover, in many cases it leads to lower framerates, which can also impair playability. Havok is not the solution to all evils, but at least it is vendor agnostic, and when it gets GPU-accelerated support it will work on all cards providing an OpenCL driver. Which is IMO a more desirable option than having half the games supporting only one vendor and the other half the other one. So if PhysX also goes the OpenCL route, I think most people will cease to talk against it. The best thing from the gamer's point of view would be Microsoft launching something like "Direct Physics", as this would become "the" physics standard (though of course from the free-market point of view it is not so easy)

And John, of course there's a double standard. Not saying you specifically but many of those criticizing PhysX seem to think Havok is the bees knees. At least you're an indiscriminate critic :) And well, the standards thing has been rehashed to death.....proprietary GPU accelerated physics is better than nothing. But at least "nothing" is freely available to everyone.

It's immature GPU physics tied to one vendor versus CPU physics available for everyone, not "nothing". Of course GPU physics will provide much more advanced effects, but at this time we don't have enough examples of "how" good it will be (or which cards will be required for acceptable performance)
 
GPU-accelerated physics is still immature and doesn't yet change the game experience much compared to CPU physics....Havok is not the solution to all evils, but at least it is vendor agnostic, and when it gets GPU-accelerated support it will work on all cards providing an OpenCL driver.

And why would a Havok OpenCL implementation not have the same limitations as PhysX now? They obviously can't use it for gameplay physics as the vast majority of people are still running dual-cores with DX9 hardware. This blind belief in Havok is really unfounded and at this point is just a pipe-dream. In comparison PhysX is an actual working product.

The best thing from the gamer's point of view would be Microsoft launching something like "Direct Physics", as this would become "the" physics standard (though of course from the free-market point of view it is not so easy)

That would be the equivalent of Microsoft making a DirectX game engine to compete with UE3. Don't confuse middleware with an API :)

It's immature GPU physics tied to one vendor versus CPU physics available for everyone, not "nothing". Of course GPU physics will provide much more advanced effects, but at this time we don't have enough examples of "how" good it will be (or which cards will be required for acceptable performance)

Why is that comparison relevant? You do know there's CPU PhysX that's just as freely available as Havok right? The vast majority of PhysX implementations do not use GPU acceleration.
 
No one will ever look at a game box and see "Realistic physics" and go hey that looks like fun, I'll buy that game. Physics acceleration is the answer to a question that nobody asked.

Ever play those games that don't allow you to jump, reasoning that you aren't supposed to jump because you're carrying 100 lb of equipment? Yeah, that added fun, right? Not being able to go over a curb? Realistic isn't always fun.

I hope none of the big 3 (Intel, AMD, NV) buys Bigfoot networks and start hyping their own "Killer NIC" middleware.
 
I think the problem is that NVidia would seem to be saying, "Buy our stuff because, thanks to PhysX, a handful of games will show a marginal graphical improvement".

They are saying this to compete with RV870 which will improve the graphics in every game by allowing higher framerates and higher resolution alongside (hopefully) improved levels of AA and AF.

It's understandable why NVidia is going this route if their new chips aren't ready, but it would have been nicer if they had some new hardware out to benefit every game too.

A friend of mine is asking my advice about what new card to upgrade his Radeon 4850 to. I'll probably recommend that he goes with a Radeon 5850 unless the GTX285 really drops in price a lot because I think he would gain more from the performance improvement of the 5850 rather than the sparse PhysX support of a handful of games.
 
They are saying this to compete with RV870 which will improve the graphics in every game by allowing higher framerates and higher resolution alongside (hopefully) improved levels of AA and AF.

Except RV870 would be barking up the wrong tree, because most games these days are console ports, which any reasonable high-end card can already run at insane resolutions, framerates and AA/AF settings.
We don't need 'faster'. At least PhysX is giving us something we actually notice.
 
No one will ever look at a game box and see "Realistic physics" and go hey that looks like fun, I'll buy that game. Physics acceleration is the answer to a question that nobody asked.

Heh, do gamers ask for SSAO or VSM or global illumination? We get new incremental features that add up over time to impact the overall experience. Then if you take away those features we sure as hell notice. Notice how much people ridicule the cloth-tearing effects in PhysX? Five years from now, let's see how people react if they shoot at cloth in a game and nothing happens. The same thing happened to me after HL2: I expect every box in every game to break into smithereens now.

I think the problem is that NVidia would seem to be saying, "Buy our stuff because, thanks to PhysX, a handful of games will show a marginal graphical improvement".

Lol, yeah well Nvidia's sad marketing is a whole other story :LOL:
 
And why would a Havok OpenCL implementation not have the same limitations as PhysX now? They obviously can't use it for gameplay physics as the vast majority of people are still running dual-cores with DX9 hardware. This blind belief in Havok is really unfounded and at this point is just a pipe-dream. In comparison PhysX is an actual working product.

It will be a product with the same limitations as PhysX AND it will work on all cards. Which is not too shabby. PhysX is a (not yet too well) working product only on Nvidia cards. I think all the fuss about Havok is not about it being a better API, but only regarding its availability on a broader range of products.

That would be the equivalent of Microsoft making a DirectX game engine to compete with UE3. Don't confuse middleware with an API :)

Well, there are also Microsoft games and the "Games for Windows" initiative, so I think it's feasible. But not probable.


Why is that comparison relevant? You do know there's CPU PhysX that's just as freely available as Havok right? The vast majority of PhysX implementations do not use GPU acceleration.

I would only remind that the comparison is not "GPU physics" vs "nothing" but "GPU physics" vs "CPU physics", and that at the moment there is no example of a game that changes the game experience enough to make users consider it a must-have versus a CPU implementation.
 
We don't need 'faster'.

May the gods strike you for such a blasphemous statement. :p

At least PhysX is giving us something we actually notice.

What someone notices is subjective. I'd certainly notice higher frame rates in certain games, or being able to run at 8x AA rather than 4x AA. And the increased arithmetic processing and fill rates give developers more headroom for their graphics engines. Not every game is a console port. I don't know, this entire argument makes me think of some 3dfx fan (cough, me, cough) arguing against the competition's faster graphics boards because they lacked Glide support.
 
I think all the fuss about Havok is not about it being a better API, but only regarding its availability on a broader range of products.

Which is simply not true. PhysX supports all the major consoles, and also CPUs, PPUs and GPUs.
Havok currently doesn't support any PPU or GPU. So PhysX is the one available on a broader range of products.
 
Which is simply not true. PhysX supports all the major consoles, and also CPUs, PPUs and GPUs.
Havok currently doesn't support any PPU or GPU. So PhysX is the one available on a broader range of products.

I meant "when it gets GPU acceleration via OpenCL, GPU acceleration will be available on a broader range of products". Better this way, isn't it?
 
May the gods strike you for such a blasphemous statement. :p

We need faster hardware, but we shouldn't be using it just to render the same graphics faster.

What someone notices is subjective. I'd certainly notice higher frame rates in certain games, or being able to run at 8x AA rather than 4x AA.

Well, anything over 60 fps is going to go unnoticed on most monitors anyway. That's my point. Current cards can already run many games at more than 60 fps, why would I want faster than that?
I don't really need anything more than 4xAA either... 4xAA already looks really smooth, especially at high resolutions... I'd much rather have something more dramatic than the move from 4xAA to 8xAA.

I just don't see much of a use for a faster graphics card, if it's just going to render DX9-level graphics, which is what most games have been doing for the past 3-4 years.
I want more physics, tessellation, better animation, AI and that sort of thing... not 'faster'.
 
It will be a product with the same limitations as PhysX AND it will work on all cards. Which is not too shabby. PhysX is a (not yet too well) working product only on Nvidia cards. I think all the fuss about Havok is not about it being a better API, but only regarding its availability on a broader range of products.

Yeah but as of now GPU accelerated Havok isn't available on any products so I still don't get it...am I the only one living in the present here? :D

I would only remind that the comparison is not "GPU physics" vs "nothing" but "GPU physics" vs "CPU physics", and that at the moment there is no example of a game that changes the game experience enough to make users consider it a must-have versus a CPU implementation.

That's very subjective. Just check out the gametrailers comments section for their BAA vids for example. You can try your best to make it seem that unless PhysX rocks your world then it's useless but that's not how it works. Regular people appreciate little crap like smoke and blowing paper. Especially when Batman is spinning around kicking ass in the middle of it. Think of all the little things developers add to games - did the working faucets in Prey or Bioshock change your game experience?
 
I meant "when it gets GPU acceleration via OpenCL, GPU acceleration will be available on a broader range of products". Better this way, isn't it?

Yeah, and when PhysX gets acceleration via OpenCL, that advantage will be nullified again.
Currently we have no idea if or when Havok with OpenCL will ever see the light of day, let alone any games actually using it... Nor do we know what nVidia has planned for PhysX if Havok finally gets OpenCL support. It's quite possible that nVidia has already prepared for this and will have an OpenCL version, so as not to lose the market share it has worked so hard to gain in the past months.
 
I don't really need anything more than 4xAA either... 4xAA already looks really smooth, especially at high resolutions... I'd much rather have something more dramatic than the move from 4xAA to 8xAA.

Again, subjective. I'd love to have 8x AA enabled in LOTRO, but at 25x16 I'm already pushing my GTX 285 in what it can deliver (every in-game setting maxed out too, naturally). Regardless, AA and AF were mentioned upstream as just quick, obvious examples of how a faster GPU can essentially benefit everyone in practically every single game.

I just don't see much of a use for a faster graphics card, if it's just going to render DX9-level graphics, which is what most games have been doing for the past 3-4 years.
I want more physics, tessellation, better animation, AI and that sort of thing... not 'faster'.

I agree, but I also expect both IHVs' high-end DX11 parts will give us what you listed (well, maybe not heavy use of tessellation with the first gen). Which makes me scratch my head over you saying RV870 would be barking up the wrong tree. I have no insight on this, but as current consoles continue aging I suspect more and more devs will do what we're seeing with Batman: AA and Dirt 2: delay the port a bit to add PC-specific features. So recommending a 5850 is certainly in line with advice I'd give friends now (though my advice would be to wait a few months to see what NV brings out if the person can wait).
 