Call of Juarez DX10 Benchmark available

Discussion in '3D Hardware, Software & Output Devices' started by AlexV, Jun 8, 2007.

  1. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,528
    Likes Received:
    107
    Your guess is wrong, that's not what nV is complaining about. And I'm far too sleepy now to give a fully fledged, verbose explanation. Simply take a backseat, think, and you'll see it too; I think you simply got too caught up in this line of reasoning ;) TRAA/ADAA aren't gone... this is TRAA/ADAA, but implemented at the other end of the stick: instead of having to switch something on in the card's control panel, the ISVs include support right off the bat. It's as simple as that, no complex stuff to factor in.
     
  2. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    Yeah, I know that this is not really what they are claiming, but it makes me wonder whether Techland really made such a mess, because I think it's easier (and less detectable) to optimize the bench for R600 only instead of having to use a "kill that VGA's performance" switch. Maybe they really had problems with G80 MSAA.
    Anyway, if this is TAA/ADAA, it seems a bad implementation of it :???:
    (or maybe the SSAA Techland is claiming is really TAA/ADAA, as I think supersampling the whole scene is really costly)
     
  3. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,528
    Likes Received:
    107
    You're missing the point and the meaning of what I'm saying with definite passion. Oh, well.
     
  4. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    What I got is that you're saying that AA is switched at the application level instead of having a setting in the CP, so the application is controlling how AA is performed.
    And it's MSAA creating them, regardless of whether TAA is switched on or not, because TAA does not help to solve these problems, right?
    (Sorry for yesterday, but I came from a 13-hour session at work, so I was so tired that my brain was a little messed up :p )

    I'm only saying that the rendering results with MSAA look ugly, not on par with what should be expected from a "new tech" game, whether this is the result of the MSAA algorithms or not. I believed that TAA/ADAA algorithms should have been smart enough to avoid this mess, but at this point it seems I was wrong about that. Let's hope for EATM :p
     
    #64 leoneazzurro, Jun 16, 2007
    Last edited by a moderator: Jun 16, 2007
  5. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,528
    Likes Received:
    107
    Ok, perhaps now you're awake and you'll catch my meaning. This is EATM, without the E (I presume the E stands for Enhanced in ATi lingo). It's the same technique ATi will implement in their drivers as an answer to nV's multisampling transparency AA, but ATi will tweak it so that the artifacting is alleviated. It's not rocket science at all... the first time EATM showed up in drivers, it looked kind of like that, but in subsequent appearances it was obviously improving. The same is possible here and in all other titles that implement ATM as a means of dealing with transparent textures, and I'm quite certain devs will fix it (no way in hell will PGR4 or Age of Conan come out with such obvious artifacts). This is simply an added boon, as some don't tinker with CP transparency AA, or don't want it on at all times, or whatever, and I feel this is the proper way to do things: the dev should ask through his app for AA, AF, whatever, and it should be provided; forcing through the CP should simply be reserved for extreme cases.

    And this is fairly irrelevant to the shenanigans that nV alludes to. Yes, it's butt ugly; yes, it's fairly easily fixable/alleviated; no, I'm quite certain this isn't what got everyone's panties in a bunch over at Santa Clara ;)
     
  6. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    Yep, agree 100%.
     
  7. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    15,349
    Likes Received:
    2,543
    What NVIDIA claims, in case anyone doesn't know:

    "NVIDIA's DX10 hardware-based multi-sampled Anti-Aliasing resolve is disabled in the benchmark, with all NVIDIA GPUs now being forced to use software-based AA resolve similar to the ATI DX10 GPU (which lacks hardware AA resolve). This artificially deflates performance on NVIDIA graphics cards and users are not getting the benefit of a key NVIDIA hardware feature.

    A hidden "ExtraQuality" parameter only accessible via a configuration file is automatically enabled when the benchmark is run on NVIDIA hardware, no matter what the setting in the configuration file. This setting has no apparent visual quality enhancement, but reduces NVIDIA GPU performance.

    Changes to shaders that deflate NVIDIA performance by approximately 10-14%, without improving visual quality on either NVIDIA or ATI GPUs."

    Oops, already posted, sorry.
    Forum won't let me delete posts.
     
  8. Zvekan

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    136
    Likes Received:
    1
    And this is what developers of CoJ have to say regarding the issue at hand:

    In this message we would like to comment on some disputable information that was recently published by nVidia and that is related to the DirectX 10 benchmark mode in Call of Juarez.

    Before the arrival of DirectX 10, previous graphics APIs only allowed automatic Multi-Sample Anti-Aliasing (MSAA) resolves to take place in interactive gaming applications. This automatic process always consisted of a straight averaging operation of the samples for each pixel in order to produce the final, anti-aliased image. While this method was adequate for a majority of graphics engines, the use of advanced High Dynamic Range rendering and other techniques such as Deferred Rendering or anti-aliased shadow buffers requires programmable control over this operation due to the nature of the mathematical operations involved. That is, the previous approach using a simple average can be shown to be mathematically and visually incorrect (and in fact it produces glaring artefacts on occasion).

    All DirectX 10 graphics hardware which supports MSAA is required to expose a feature called 'shader-assisted MSAA resolves' whereby a pixel shader can be used to access all of the individual samples for every pixel. This allows the graphics engine to introduce a higher quality custom MSAA resolve operation. The DirectX 10 version of 'Call of Juarez' leverages this feature to apply HDR-correct MSAA to its final render, resulting in consistently better anti-aliasing for the whole scene regardless of the wide variations in intensity present in HDR scenes. Microsoft added the feature to DirectX 10 at the request of both hardware vendors and games developers specifically so that we could raise final image quality in this kind of way, and we are proud of the uncompromising approach that we have taken to image quality in the latest version of our game.

    "ExtraQuality" is a visual quality setting enabled by default in the DX10 version of Call of Juarez. In benchmark mode, "ExtraQuality" mode does two things. First, it increases shadow generation distance in order to apply shadowing onto a wider range of pixels on the screen, resulting in better quality throughout the benchmark run. Second, it increases the number of particles rendered with the geometry shader in order to produce more realistic-looking results, like for example waterfall, smoke and falling leaves. The attached screenshot illustrates those differences when ExtraQuality is disabled. ExtraQuality is designed as a default setting to reflect the visual improvements made possible by DX10 cards and is not meant to be disabled in any way.

    All updates to shaders made in the final version of the Call of Juarez benchmark were made to improve performance or visual quality or both, for example to allow anisotropic texture filtering on more surfaces than before. This includes the use of more complex materials for a wider range of surfaces. At the same time we implemented shader code to improve performance on the more costly computations associated with more distant pixels. Some materials were also tweaked in minor ways to improve overall image quality. One of the key strengths of NVIDIA's hardware is its ability to perform anisotropic filtering at high performance, so we are puzzled that NVIDIA complains about this change when in effect it plays to their strengths.

    Default settings were chosen to provide an overall good user experience. Users are encouraged to modify the settings in the CoJDX10Launcher as required. Using larger shadow maps is one option that we would encourage users to experiment with, and in our experience changing this setting does not affect NVIDIA's comparative benchmark scores greatly.

    We are disappointed that NVIDIA have seen fit to attack our benchmark in any way. We are proud of the game that we have created, and we feel that NVIDIA can also be proud of the hardware that they have created. Nonetheless these artistic decisions about the settings of a game rightly belong in the hands of the games developer, not the hardware manufacturer.

    Thank you and don't hesitate to contact us if you have any questions.
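The "shader-assisted MSAA resolve" Techland describes boils down to tonemapping each sub-sample before averaging, instead of averaging raw HDR values and tonemapping once. A minimal Python sketch of the idea; the Reinhard operator x/(1+x) is an assumed stand-in here for whatever tonemapper CoJ actually uses:

```python
# Hypothetical sketch of an "HDR-correct" shader-assisted MSAA resolve.
# Each pixel carries several HDR samples; the custom resolve tonemaps
# each sample individually before averaging, instead of averaging the
# raw HDR values and tonemapping the result once.

def tonemap(x):
    """Reinhard operator as a stand-in tonemapper (non-linear)."""
    return x / (1.0 + x)

def naive_resolve(samples):
    """Fixed-function style: average raw HDR samples, then tonemap."""
    return tonemap(sum(samples) / len(samples))

def shader_resolve(samples):
    """Shader-assisted style: tonemap each sample, then average."""
    return sum(tonemap(s) for s in samples) / len(samples)

# An edge pixel with one very bright sample and three dark ones:
samples = [100.0, 0.1, 0.1, 0.1]
print(naive_resolve(samples))   # dominated by the single bright sample
print(shader_resolve(samples))  # a proper blend of tonemapped colours
```

With the naive order, the one bright sample drags the averaged HDR value so high that the pixel resolves to near-white; tonemapping per sample gives an edge pixel that is actually a blend of its covered surfaces.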
     
  9. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,481
    Likes Received:
    500
    Location:
    New York
    All Techland has to do is provide evidence of hardware MSAA not producing the correct image and that will wrap up the whole thing.
     
  10. ERK

    ERK
    Regular

    Joined:
    Mar 31, 2004
    Messages:
    287
    Likes Received:
    10
    Location:
    SoCal
    In my mind, it is already wrapped up.

    I thought Techland's reply was very clear. If they want to use this required feature of DX10, as a developer that is their right. And looking at the shots at the INQ, I think the HDR-correct AA is noticeably better than standard MSAA.

     
  11. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
  12. Zengar

    Regular

    Joined:
    Dec 3, 2003
    Messages:
    288
    Likes Received:
    0
    I don't get it. Why can't one resolve the FP render target first and apply the tonemapping in the final stage? The only real reason I see for a custom AA resolve is in deferred rendering.
     
  13. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Because tonemapping is a non-linear mapping. AA assumes a linear blend of samples.

    When you do what you've suggested, tonemapping after AA, the blended pixels become false-coloured.
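A quick numeric illustration of that false colouring, as a hypothetical Python sketch assuming a per-channel Reinhard tonemap x/(1+x):

```python
# Why resolving MSAA before tonemapping false-colours edge pixels:
# the non-linear tonemap does not commute with the linear average.

def tonemap(c):
    """Per-channel Reinhard tonemap of an RGB tuple."""
    return tuple(x / (1.0 + x) for x in c)

def average(a, b):
    """Linear 50/50 blend of two RGB tuples (a 2x MSAA resolve)."""
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

bright_red = (8.0, 0.5, 0.5)   # HDR sample from a bright red surface
dim_blue   = (0.1, 0.1, 1.0)   # HDR sample from a dim blue surface

# AA then tonemap (what Zengar suggested):
wrong = tonemap(average(bright_red, dim_blue))
# Tonemap then AA (the shader-assisted resolve):
right = average(tonemap(bright_red), tonemap(dim_blue))

print(wrong)  # red channel still dominates heavily
print(right)  # a fairer mix of the two tonemapped colours
```

In the first order the bright red sample dominates the linear average before the tonemap ever sees it, so the edge pixel stays strongly red; in the second order each surface is tonemapped to its displayed colour first, and the blend sits visibly between the two.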


    Jawed
     
  14. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,481
    Likes Received:
    500
    Location:
    New York
    Yeah, it sure is. But Nvidia made it sound like they had initially gone with hardware AA and switched to shader AA in the patch with no IQ difference.

    Which shots? I only see a DX9 to DX10 comparison on the Inq article.
     
  15. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,620
    Likes Received:
    5,635
    So again, my question is...

    Are developers in the future going to take this route in order to have control over how AA is implemented in their games?

    If so, does this mean that ATI was correct in moving AA almost completely to shaders? Sacrificing AA in past and near future titles for more consistent (and possibly greater) performance in future titles?

    As this is a required part of DX10, one then has to wonder if Nvidia will be moving to entirely shader-based AA in G90 and higher IF (big if?) developers take advantage of this feature. And I pray to whatever holy being might or might not exist that this is so, as I'm so sick and tired of games that don't support AA.

    The interview with SirEric seems to imply that this was their mode of thinking WRT R600. If DX10 requires this feature, and it's something that developers they've talked to have stated they wanted... then they might have felt this was one area where they could better use their transistors, if DX10 titles start to take this route to control (implement?) AA.

    Not to say that this was entirely a good choice as they are gambling that R600 will still be relevant when those games hit the market. However, for someone like me that only infrequently buys a video card it might be a good thing.

    Regards,
    SB
     
  16. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Oh the irony:

    http://forum.beyond3d.com/showpost.php?p=539362&postcount=13

    as it was Kirk who first alerted us (well, some of us, at least) to the fact that AA+HDR should be done programmably for correct results.

    Jawed
     
  17. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Of course. But given their track record of general reluctance to support AA in-game, you've gotta wonder how many more years...

    There is still AA support in the RBEs, to test Z and write samples, which amounts to a fair bit of hardware.

    ATI has also, if the screendoor transparency patent is to be believed, implemented a shader-programmable AA mask, "oSampleMask". Looks like something we'll see in a future version of D3D. But I suspect developers can't fully access this right now. I dunno, I'm confused to be honest, what with transparency AA being a feature of CoJ.

    I honestly don't think ATI is "sacrificing performance" in implementing shader AA. AA performance in some games is "fine":

    http://www.xbitlabs.com/articles/video/display/radeon-hd-2900-games_7.html#sect1

    It's not fine by me though, as I'm bemused that ATI aimed so low. But AA performance, when the driver is "working right", appears to be where ATI wanted it.

    Deferred rendering/shading engines are becoming more and more popular. AA is only possible in these (irrespective of the use of "HDR" in the engine) with a shader AA approach.

    Entirely programmable AA (amongst other aspects of "output merge", which is the blending and testing of pixels or pixel data in the render target) is definitely coming to D3D. But it looks like it's years away.

    Yeah, it's a gradual cut-over from fixed-function to programmable. As far as I can tell, the RBEs in R600 only need to blend colour for one pixel per clock. Prior GPUs had to do this, but also they could blend 2 or more AA samples per clock.

    So ATI has saved blending ALUs in the RBEs, as AA samples are never blended there, just a single colour per pixel.

    At the same time, they've increased the count of Z-test units.

    And for people like me who'll buy a D3D10+ GPU at some point, it's good because they'll have got on top of the drivers by then.

    Jawed
     
  18. ERK

    ERK
    Regular

    Joined:
    Mar 31, 2004
    Messages:
    287
    Likes Received:
    10
    Location:
    SoCal
    Sorry, I wrote that wrong. It should have gone something more like: I've seen HDR-correct AA, like the DX10 shot at the INQ, and prefer it in general to standard MSAA applied to HDR (for reasons explained by Jawed above, I think). I agree the implied comparison was invalid.

    I guess I have to back off from my position a little bit if AMD really did have overmuch influence on the decision to use this AA method. Any evidence they did? I know of none. Obviously, NVIDIA's hardware TrAA could likely be made to give a "good enough" image (to some viewers). It'd be nice to see screen comparisons (as you requested) between the shader-assisted HDR-correct method and NVIDIA's best with hw-accelerated MSAA. Still, that's just a wish, and I don't think the developer is ethically bound to explain themselves, as I still buy their original explanation.

    ERK
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.