AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 GPU lineup?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

  • Total voters: 155
  • Poll closed.
BattleForge, what does DX11 add?

The only thing I know for sure is HD-SSAO via compute shaders. Possibly other stuff offloaded to compute shaders?

Yes, currently only the new SSAO is compute shader accelerated. I also had some tessellation stuff prepared, but with the RTS-typical camera setup there was no visual benefit from it.

But regular BattleForge players know that we add new stuff quite often, so it's possible that future patches will bring more DX11 goodness. There are still some shaders that might benefit from Shader Model 5, but we need to test this in more detail.

Where's Demirug when you need him? ;)

Sleeping.

But the DX11 path of this game works on GF9 too, if my test is correct.

Correct. Every system that has the DX11 runtime and at least a DX10 card will make use of the new runtime, but it will only run with the feature level that is supported by the hardware. The new SSAO compute shader requires feature level 11. There is also a feature level 10 pixel shader for the new SSAO, but it is slower than the compute shader.
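
For illustration, this is roughly how an application asks the DX11 runtime for the highest feature level the installed card offers when creating the device. A generic sketch, not BattleForge's actual code; the helper name and fallback logic are made up:

Code:
#include <d3d11.h>

// Create a D3D11 device on whatever feature level the hardware offers and
// report whether the compute shader SSAO path (feature level 11) is usable.
// Hypothetical helper for illustration only.
bool SupportsComputeShaderSSAO()
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0,   // needed for the cs_5_0 SSAO
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0    // pixel shader SSAO fallback
    };

    ID3D11Device*        device   = nullptr;
    ID3D11DeviceContext* context  = nullptr;
    D3D_FEATURE_LEVEL    obtained = D3D_FEATURE_LEVEL_10_0;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        requested, sizeof(requested) / sizeof(requested[0]),
        D3D11_SDK_VERSION, &device, &obtained, &context);

    const bool ok = SUCCEEDED(hr) && obtained >= D3D_FEATURE_LEVEL_11_0;

    if (context) context->Release();
    if (device)  device->Release();
    return ok;
}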

Does the DX11 path improve framerate or effects, or both, in this patch?

We are seeing improvements up to ~38% in the average FPS when comparing feature level 10 and 11 on the same hardware.
 
We are seeing improvements up to ~38% in the average FPS when comparing feature level 10 and 11 on the same hardware.

Not a tech here, but could this type of upgrade/patch be applied in similar fashion to other games running DX10, to see improvements of this magnitude?

Richard Huddy stated we'd see improvements using DX11, but I didn't expect a patch to add such an FPS increase.
 
Here is a graph to help our imagination:

[Image: bf1920.png (PCLab.pl BattleForge benchmark graph)]


Thanks to PCLab.pl for the extra effort of testing in DX11!
 
Not a tech here, but could this type of upgrade/patch be applied in similar fashion to other games running DX10, to see improvements of this magnitude?

Richard Huddy stated we'd see improvements using DX11, but I didn't expect a patch to add such an FPS increase.

If a game makes heavy use of post-processing (like SSAO), it's a good target for this kind of performance improvement by using a compute shader instead of the full-screen pixel shader approach.
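
To make that concrete: on the host side the difference is roughly a full-screen draw versus a tiled dispatch, and the actual saving comes from the compute shader staging each tile's depth neighbourhood in shared memory once instead of every pixel re-reading it. A sketch with made-up shader and resource names, not the game's actual code:

Code:
#include <d3d11.h>

// Full-screen pixel shader SSAO: one screen-covering triangle (generated in
// the vertex shader from SV_VertexID); every pixel samples its own depth
// neighbourhood independently.
void RunPixelShaderSSAO(ID3D11DeviceContext* ctx, ID3D11PixelShader* ssaoPS)
{
    ctx->PSSetShader(ssaoPS, nullptr, 0);
    ctx->Draw(3, 0);
}

// Compute shader SSAO: one thread group per 16x16 screen tile. Inside the
// shader the group loads the tile's depth samples into groupshared memory
// once, and all threads in the group reuse them.
void RunComputeShaderSSAO(ID3D11DeviceContext* ctx,
                          ID3D11ComputeShader* ssaoCS,
                          ID3D11UnorderedAccessView* ssaoOutputUAV,
                          UINT width, UINT height)
{
    ctx->CSSetShader(ssaoCS, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &ssaoOutputUAV, nullptr);
    ctx->Dispatch((width + 15) / 16, (height + 15) / 16, 1);
}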

And I guess the only hardware currently is the 5800 series.

Yes, as I haven't seen any other hardware that supports feature level 11 yet.
 
If a game makes heavy use of post-processing (like SSAO), it's a good target for this kind of performance improvement by using a compute shader instead of the full-screen pixel shader approach.

Why does the DX10-level hardware path use a pixel shader instead of CS4.x, though?
 
Demirug

This is perhaps only tangentially connected to tessellation for BattleForge, but there have been quite a few times I've wished I could zoom in closer to more fully appreciate some of the great models in the game. And also times I've wanted to zoom out just a tad farther, as the default zoom out still feels cramped.

Would it be possible to allow users to zoom in closer on models, and in the process also implement your tessellation? The closer zoom is something I wish for every time I summon some of the really interesting-looking models, and tessellation could be a further boon on that.

I realize something like that would probably be low priority since I'm sure most people are fine with the default camera, but one can always wish. :)

Regards,
SB
 
Why does the DX10-level hardware path use a pixel shader instead of CS4.x, though?

CS4.x is a pain: bad (or rather, no) documentation and a huge list of limitations. We need the pixel shader version anyway (for cards/drivers without CS support), so we decided not to invest the time in a CS4.x version until we get better documentation.

Just out of curiosity, what was the impact on performance?

To be honest, we never tested the actual performance.

Demirug

This is perhaps only tangentially connected to tessellation for BattleForge, but there have been quite a few times I've wished I could zoom in closer to more fully appreciate some of the great models in the game. And also times I've wanted to zoom out just a tad farther, as the default zoom out still feels cramped.

Would it be possible to allow users to zoom in closer on models, and in the process also implement your tessellation? The closer zoom is something I wish for every time I summon some of the really interesting-looking models, and tessellation could be a further boon on that.

I realize something like that would probably be low priority since I'm sure most people are fine with the default camera, but one can always wish. :)

Regards,
SB

Camera control is a game feature, not an engine feature, so I am the wrong person to ask.
 
CS4.x is a pain: bad (or rather, no) documentation and a huge list of limitations. We need the pixel shader version anyway (for cards/drivers without CS support), so we decided not to invest the time in a CS4.x version until we get better documentation.

Fair enough. As far as I know, nVidia only has CS enabled by default in its official Windows 7 drivers (Vista requires some registry editing to enable it; the drivers do have support, and it works as well as on Windows 7 as far as I could tell). AMD doesn't have any official drivers with CS support yet, and neither does Intel.

I hope AMD doesn't get any nasty ideas like not enabling CS on their DX10 hardware at all, to try and make their DX11 hardware more attractive to developers.

I'd be interested to see if DX10 hardware can get some kind of a boost out of SSAO as well with CS, and if so, how much.
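
For what it's worth, an application can at least query at runtime whether a given DX10 card/driver combination exposes CS4.x before picking a path. A small sketch using the D3D11 CheckFeatureSupport call (the helper name is made up; the device is assumed to already exist):

Code:
#include <d3d11.h>

// Returns true if this (feature level 10.x) device/driver exposes compute
// shader 4.x plus raw and structured buffers. Illustration only.
bool SupportsCS4x(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS, &opts, sizeof(opts));

    return SUCCEEDED(hr) &&
           opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x != 0;
}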
 
AMD Demos Dual Cypress Graphics Card - Hemlock from VR-zone

They are talking 500$ for the 5870X2! OK, what's the deal here? 400$ for the 5870, 500$ for the 5870X2? Either it has really bad scaling, or GT300 will be better than we thought! Or maybe someone at AMD saw my Hitler videos cursing at their prices! :LOL: (There's always the possibility we are facing some wrong info, though.)

========================

Fudzilla also had some news yesterday regarding Asus voltage tweaks. They report 1000MHz+ for the core, for both the 5870 and the 5850! Wow! OK, there may be some heat/fan noise issues, but I think that a 5850 @ 1050MHz could end up faster than the GTX 295, no?!
 
I'd be interested to see if DX10 hardware can get some kind of a boost out of SSAO as well with CS, and if so, how much.

It would be smaller for multiple reasons. The most significant are:
- Smaller block of shared memory per thread group. (-> more thread groups needed)
- More restrictive shared memory write rules. (-> more threads per group needed)
- No gather4 support. (-> more read instructions per thread needed)
 
It would be smaller for multiple reasons. The most significant are:
- Smaller block of shared memory per thread group. (-> more thread groups needed)
- More restrictive shared memory write rules. (-> more threads per group needed)
- No gather4 support. (-> more read instructions per thread needed)

Well yeah, obviously it's not going to be as good as DX11-level... but it could still be significantly faster than pixel shaders.
 
AMD Demos Dual Cypress Graphics Card - Hemlock from VR-zone

They are talking 500$ for the 5870X2! OK, what's the deal here? 400$ for the 5870, 500$ for the 5870X2? Either it has really bad scaling, or GT300 will be better than we thought!

If you look carefully there, you'll see that the supposed Hemlock is 2x5850, roughly. The 5850 doesn't cost 400$. If it is indeed a dual 5850, and not 2x5870, that's pretty weak IMHO.
 
If you look carefully there, you'll see that the supposed Hemlock is 2x5850, roughly. The 5850 doesn't cost 400$. If it is indeed a dual 5850, and not 2x5870, that's pretty weak IMHO.

Well, judging from the power consumption and heat generation of the single 5870 card, it seems that a full 5870X2 just isn't a realistic option at this point. So the logical approach would be to do what nVidia did with the GTX295: Take two 'watered-down' GPUs instead, to keep power consumption and heat in check.

They also 'need' an X2 card, if they want to take the title of performance leader.
I suppose they want it badly enough that they're not going to wait until 5870 production matures and power consumption drops off a bit... or perhaps they've already decided that it's never going to get to the point where a full 5870X2 is going to be feasible.

Would be fun if GT300 turned out to be the smaller and more performance-per-watt GPU of this generation... then AMD and nVidia would have swapped strategies completely :)
 
Well, judging from the power consumption and heat generation of the single 5870 card, it seems that a full 5870X2 just isn't a realistic option at this point. So the logical approach would be to do what nVidia did with the GTX295: Take two 'watered-down' GPUs instead, to keep power consumption and heat in check.

They also 'need' an X2 card, if they want to take the title of performance leader.
I suppose they want it badly enough that they're not going to wait until 5870 production matures and power consumption drops off a bit... or perhaps they've already decided that it's never going to get to the point where a full 5870X2 is going to be feasible.

Would be fun if GT300 turned out to be the smaller and more performance-per-watt GPU of this generation... then AMD and nVidia would have swapped strategies completely :)

They said in the article that both 5870x2 and 5850x2 will be available...
 
About the 5870 X2, is it @ 1600 SPs per GPU? Even if it's 725MHz, if it has all of the shaders working it would be a really good deal!
 