How will NVidia counter the release of HD5xxx?

What will NVidia do to counter the release of HD5xxx-series?

  • GT300 Performance Preview Articles

    Votes: 29 19.7%
  • New card based on the previous architecture

    Votes: 18 12.2%
  • New and Faster Drivers

    Votes: 6 4.1%
  • Something PhysX related

    Votes: 11 7.5%
  • Powerpoint slides

    Votes: 61 41.5%
  • They'll just sit back and watch

    Votes: 12 8.2%
  • Other (please specify)

    Votes: 10 6.8%

  • Total voters: 147
8xMSAA in 2560? Doesn't sound like a very demanding game to me. Two questions for when you have time: how does it fare with 4xAA instead, and what's roughly the average framerate that still guarantees playability?

It's not a very demanding game as far as GPU workload is concerned, so a single card struggling at 1600p with PhysX on isn't all that surprising. At lower resolutions the game is entirely GPU PhysX limited.


[Image: SLIPhysX.png]


It's due to the way standard SLI rendering with PhysX is set up. At lower resolutions you're actually limited by the GPU compute performance of the primary GPU, since with AFR only the primary GPU handles PhysX operations. Obviously, as resolution scales up, AFR will have a better balance because there's more traditional GPU workload to work with.

Shutting off SLI and just running the second card as a PhysX device actually does more than SLI at lower resolutions. Alternatively, keeping SLI and running a third GPU independently as a PhysX device helps alleviate this bottleneck. It's just important to understand that this is a GPU compute bottleneck that can be dealt with in a few ways.

[Image: multiPhysX.png]


[Image: 9800GTXPhysX.png]
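To make the bottleneck described above a bit more concrete, here is a rough back-of-envelope frame-time model (a minimal sketch with hypothetical millisecond costs, not measured from any real card): under standard SLI/AFR the primary GPU has to run every frame's PhysX simulation on top of its share of the rendering, whereas dedicating a separate GPU to PhysX lets the two workloads overlap.

```python
# Rough frame-time model of the AFR + GPU PhysX bottleneck described above.
# All per-frame costs are hypothetical milliseconds, chosen only to illustrate
# the scaling argument, not measured from real hardware.

def afr_frame_time(render_ms, physx_ms, num_gpus=2):
    """Standard SLI (AFR): rendering throughput is spread across the GPUs,
    but the primary GPU also runs every frame's PhysX simulation."""
    render_per_gpu = render_ms / num_gpus      # AFR amortizes the raster work
    primary = render_per_gpu + physx_ms        # primary GPU: render + PhysX
    secondary = render_per_gpu                 # other GPUs: render only
    return max(primary, secondary)             # the slowest GPU paces the frame

def dedicated_physx_frame_time(render_ms, physx_ms):
    """One GPU renders, the other only runs PhysX; the two overlap."""
    return max(render_ms, physx_ms)

physx_ms = 14.0  # GPU PhysX cost is largely independent of resolution
for label, render_ms in [("low res", 8.0), ("1600p", 28.0)]:
    print(label,
          "AFR:", afr_frame_time(render_ms, physx_ms), "ms",
          "| dedicated PhysX GPU:", dedicated_physx_frame_time(render_ms, physx_ms), "ms")
```

With these made-up numbers the dedicated PhysX card wins clearly at low resolution (14ms vs 18ms per frame) and the two setups converge at 1600p (28ms either way), which is the same balance shift the charts above show.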
 
This topic wasn't titled "Is PhysX any good?". I'm sure that this discussion could be continued somewhere else.
 
Who needs Youtube when you've got one of these and one of these :LOL:

But if you don't already have a Nvidia card, you can only see Physx in gameplay movies on Youtube and the like. Someone upgrading in the next six months then has the choice of a very fast, next-gen DX11 card without Physx, or a current-gen DX10 card with Physx.

Personally, it's difficult for me to get excited by something I can't see in my games, when up against a next-gen part that promises the next version of DX that all cards have been advancing with over the last few years since it became a well supported standard. You can't see Physx unless you are already a Nvidia customer, so it doesn't really tempt people away from other brands unless it can become a widely supported industry standard.

That's why I don't think marketing Physx is enough against the marketing of a DX-next generation card. Proprietary features never have been in the past. But to be fair, that's all Nvidia have got to focus on - 120hz monitors + 3D glasses, or academic CUDA, isn't going to do it for the mainstream gaming community.
 
And you can't see DX11 unless you're from the future. It doesn't really make sense to try to weigh future, unknown DX11 effects against current, available PhysX functionality.
 
Speaking of physics, I've just got Red Faction 3 and the destruction is done very well and incorporated into the game quite well, er, as well ;) It's not GPU accelerated though (neither PhysX nor Havok).
Also, in a way it's a step back from Red Faction 1 because the terrain is not destructible, only buildings and objects. But it is consistent, as all buildings are totally destructible, unlike RF1 where you would blast through 10 feet of solid rock only to be confronted with an indestructible wooden door.

But to be fair, that's all Nvidia have got to focus on - 120hz monitors + 3D glasses,

About that, were those two monitors (or one in the UK, because the non-Samsung one fails British standards and can't be sold here) created to work with 3D Vision, or is there going to be a move to 120Hz monitors anyway?

PS: to Nvidia, at least support triplehead (you should push for this, Chris ;)). It can't compare against 24 monitors, but it's enough for nearly everyone...
 
This topic wasn't titled "Is PhysX any good?". I'm sure that this discussion could be continued somewhere else.

Isn't the topic of this thread how nVidia will counter the release of HD5xxx? As far as I can see, the main focus of nVidia is PhysX. Why, then, is a discussion about PhysX (and nVidia's claims regarding it) not relevant to this thread?

I've read many posts in this thread from what I suppose are nVidia/PhysX aficionados, claiming its benefits with regard to AMD/ATI's new offerings. Those posts still stand. Why is it all of a sudden not OK anymore? Am I missing something? Please correct me if that is the case.
 
That's why I don't think marketing Physx is enough against the marketing of a DX-next generation card. Proprietary features never have been in the past. But to be fair, that's all Nvidia have got to focus on - 120hz monitors + 3D glasses, or academic CUDA, isn't going to do it for the mainstream gaming community.

I wonder if the majority shares your opinion.
DX10 was a pretty underwhelming success, and its problematic adoption may have made people wary of new DX versions not delivering on their promises.
Also, I wonder if other people are as closed towards PhysX as you are. It seems that lots of people respond more in the vein of "Wow, I love those effects, but why can't I get them on my AMD GPU or CPU?". That's where nVidia's marketing pitch comes in.

Another thing is... The DX11 API will actually work on nVidia's hardware (and current drivers, including DirectCompute). Reminds me of the days when ATi only had SM2.0 hardware (Radeon X800 series) vs nVidia's SM3.0 hardware (GeForce 6-series). Since both were 'DX9', most people didn't really know or care about the difference. ATi never really had problems selling their 'outdated' hardware. I never heard anyone here complain about it either :)
 
would they tell you?


Better question would be: how long have they been working on that other GPU?

That kind of information does leak out from time to time in dev blogs and such. But given the timelines, one does suspect that ATI's product this time around got a lot more "reference" kind of usage in the late QA testing for the DX11 release, and in early dev testing as well.
 
This topic wasn't titled "Is PhysX any good?". I'm sure that this discussion could be continued somewhere else.

I was leaning that way for sure, and then I looked at the poll options and, alas, the author (ahem!) opened the door to the relevance of this part of the discussion with option #4.
 
And you can't see DX11 unless you're from the future. It doesn't really make sense to try to weigh future, unknown DX11 effects against current, available PhysX functionality.

But you never have been able to see the future, the same way you can't tell if there will be any Physx titles on the level of BAA this time next year when Nvidia have brought out their own DX11 cards, or if it's going to fizzle out when Nvidia get interested in something else. Especially when looking at the available Physx functionality today, it's not that impressive: BAA and a few curtains in Mirror's Edge that aren't missed are about it. You can look at the past and see that DX has always trumped proprietary features.

If you don't already have a Nvidia card and want to upgrade, you have the choice of a DX10 card with Physx, or a next-gen DX11 card. I can't see people passing up the faster next-gen card, unless it's out of brand loyalty. People always want the newer stuff, not the old mutton dressed as lamb. If what we've heard about the OEMs is true, they've already made that choice for their customers, and there's a lot of sales right there.
 
It's a ridiculous argument no matter how you look at it. People are saying you can't run PhysX at X resolution and Y settings. But this has always been the case for lots of things. Replace PhysX with "AA", "Very High", "Soft shadows" etc, etc.

Reading this thread you would think that before PhysX people were playing every game in all its glory at 2560x1600 on 6600GTs. Btw, the demo runs fine for me (45fps+) at 1680x1050 4xAA, max PhysX on a single stock GTX 285. Things aren't quite so smooth at 2560x1600 though....

It's not a very demanding game as far as GPU workload is concerned, so a single card struggling at 1600p with PhysX on isn't all that surprising. At lower resolutions the game is entirely GPU PhysX limited.


[Image: SLIPhysX.png]


It's due to the way standard SLI rendering with PhysX is set up. At lower resolutions you're actually limited by the GPU compute performance of the primary GPU, since with AFR only the primary GPU handles PhysX operations. Obviously, as resolution scales up, AFR will have a better balance because there's more traditional GPU workload to work with.

Shutting off SLI and just running the second card as a PhysX device actually does more than SLI at lower resolutions. Alternatively, keeping SLI and running a third GPU independently as a PhysX device helps alleviate this bottleneck. It's just important to understand that this is a GPU compute bottleneck that can be dealt with in a few ways.

[Image: multiPhysX.png]


[Image: 9800GTXPhysX.png]


OK, this is how it runs on a high-end setup. Could you please test how it runs on a single mainstream card like the GTS 250?
 
I wonder if the majority shares your opinion.
...
Also, I wonder if other people are as closed towards PhysX as you are. It seems that lots of people respond more in the vein of "Wow, I love those effects, but why can't I get them on my AMD GPU or CPU?". That's where nVidia's marketing pitch comes in.

If I remember correctly, ATI gained a lot of market share in the discrete segment with the HD4xxx series. That suggests the above-mentioned argument ("Wow, I love those effects, but why can't I get them on my AMD GPU or CPU?") wasn't really that convincing. That's why I don't see how Nvidia can beat "we've got much better performance, DX11 and stuff" with it. But I'm sure they will try, of course.

Another thing is... The DX11 API will actually work on nVidia's hardware (and current drivers, including DirectCompute). Reminds me of the days when ATi only had SM2.0 hardware (Radeon X800 series), vs nVidia's SM3.0 hardware (GeForce 6-series). Since it was both 'DX9', most people didn't really know or care about the difference. ATi never really had problems selling their 'outdated' hardware. I never heard anyone here complain about it either :)

Well, in my country (Poland) I remember that the DX9b vs DX9c debate was really strong, and in the end the Far Cry patch alone was enough to convince 80% of the audience that they would be better off buying DX9c hardware. The reason for this was that everyone knew DX is an industry standard and ATI would have to catch up eventually.
 
Nvidia's support of 120Hz monitors is for stereo 3D. It has nothing to do with pushing framerates beyond the usual 60Hz limit.
 
TGDaily is saying that Nvidia is countering the HD5xxx (in addition to physx > all) with ............ Press releases!
However Nvidia does not seem to want to let Ati get away with that and has been trying to release a few press release spoilers.

It claims that its next series of cards are not going to be based on a redesigned GT200 core, but rather will be based on a completely new core architecture currently codenamed the GT300.

It says that the GT300 has performance significantly higher than anything ATI and its RV870 core can manage. Plus if they use two of them in their GX2 arrangement they will be faster than a well greased leopard on its way to a wildebeest convention.
 
Nice, but we already have pretty good motion blur in DX10. Heck, with Crysis most people didn't even see the difference between DX9 and DX10 when it came to motion blur.
The visual impact is just very minimal at this point.

What kind of motion blur are we actually talking about? I'm not even aware of what CryEngine2 is using.

I recall leaving the so-called "motion blur" disabled in some past racing sims because it was more like a full-screen blur (above a specific speed range) than actual antialiasing in the temporal dimension. The latter, as far as I'm aware, costs quite a bit in performance and shouldn't be that easy to properly implement in an interactive game.

Of course, you didn't make that comparison, yet one of the reasons a movie looks a lot closer to what the human eye perceives in real time is that, despite a typical framerate of say 24fps, sudden motion gets antialiased in the temporal dimension, and definitely not with just a few samples.
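For what it's worth, the "antialiasing in the temporal dimension" idea is easy to sketch: instead of shading the scene once per frame, you average several sub-frame samples spread across the exposure interval, which is also why doing it properly gets expensive (each extra sample is roughly another full render). A minimal illustrative sketch, with a made-up render_scene stand-in rather than any real engine API:

```python
# Minimal sketch of motion blur as temporal antialiasing: average several
# sub-frame samples taken across the frame's exposure interval.
# render_scene(t) is a hypothetical stand-in for a full scene render at time t,
# which is why the cost grows roughly linearly with the sample count.

def motion_blurred_frame(render_scene, frame_start, frame_duration, samples=8):
    accum = None
    for i in range(samples):
        t = frame_start + frame_duration * (i + 0.5) / samples  # sub-frame time
        image = render_scene(t)                                  # one full render per sample
        accum = image if accum is None else [a + p for a, p in zip(accum, image)]
    return [a / samples for a in accum]                          # temporal average

# Toy example: a 1-D "scene" where a single bright pixel sweeps across 8 pixels
# during the frame; the blurred result smears it over the whole row.
def render_scene(t):
    img = [0.0] * 8
    img[int(t * 8) % 8] = 1.0
    return img

print(motion_blurred_frame(render_scene, frame_start=0.0, frame_duration=1.0))
```

A cheap full-screen blur only smudges one such sample after the fact, which is why it looks (and costs) so different from genuinely averaging many samples over time.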
 
Playing at more reasonable settings like 1920x1080 and with 4xMSAA, I'm sure the cards still manage to get 60+ fps. 'Not fast enough' is just pure rubbish.
Bold statement (FUD?) http://www.revioo.com/articles/a13159_3.html
High-end setup, 1680x1050 4xAA: 9800GTX 12fps, GTX 280 41fps. Maybe NV will make cards with an N-1 gen GPU alongside the G200 to compensate for the PhysX impact? It raises FPS back into the 70+ range. Still, I'd rather have a couple of cores from a quad do physics than an extra GPU/card.
 
No I don't. I stand by what I said, which was of course about the extra visuals in the context of DX11 and the upcoming hardware generation.
This particular step isn't going to lead to all that much of a visual impact. So it's a dead end trying to go that route. As I said, what are you going to do, tessellate the heck out of everything?
Accelerated physics gives you more bang for the buck. People see more happening on the screen, so it's easier to impress them.

If it's going to help visuals to be more realistic then yes. What's wrong with that?
 