LucidLogix Hydra, madness?

Discussion in 'Architecture and Products' started by Citrous, Jul 15, 2008.

  1. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    There are profiles in CCC, so you can create a profile with AI on and another with AI off.
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,110
    Location:
    New York
    That's not much of a profile. AI is currently an all-or-nothing deal, no?
     
  3. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    Not really. There are probably some optimizations that are not accessible from the CCC and are always on, but there is an "AI Off" option that turns off most of the standard optimizations. Moreover, you can choose between the "Standard" and "Advanced" AI levels. But this is OT, I think.
     
  4. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    So how's that helping me get AA working in Oblivion without texture filtering optimisations? Or get rid of filtering optimisations without losing half the performance of my 4870X2?
     
  5. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    You're experiencing massive texture issues?
     
  6. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    I do. Anisotropic filtering quality on my 5870, even with AI Off, sucks compared to my GTX280 in HQ mode. Which is kinda strange considering it's supposed to be the best on the market, no?
     
  7. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
    I have a 4850 Mobility with Cat 9.9, I'm playing Oblivion these days and I don't experience massive issues (nor minor issues, for what it's worth). And a 4850 Mobility is more than enough to play Oblivion smoothly at 1680×1050 (my notebook's screen resolution) with 8x AA.
    So, could you please stop going OT and post in the appropriate thread if you are facing issues?
     
  8. compres

    Regular

    Joined:
    Jun 16, 2003
    Messages:
    553
    Likes Received:
    3
    Location:
    Germany
    You own both an X2 and a 5800-series GPU? Oo
     
  9. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Don't forget his X1950XTX
     
  10. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    I still have my 3870X2 too, yes. Why, don't you believe me?
     
  11. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
  12. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    Sounds pretty good. And apparently it works even in a dual-monitor configuration, which is great!
     
  13. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Seamless scaling of ocl kernels on a quad 5970?
    :razz:
    <3
    :runaway:

    OK, probably not what I'm thinking. It seems they are distributing different draw calls to different GPUs and then compositing the results. I wonder what they have in mind for scaling OpenCL and DX compute shaders. But yeah, this could be wonderful if integrated right on CPUs. More likely, it'll go on Intel-only CPUs, which will trash your performance (though not kill it outright like NV does) if you use green/red chips. :roll:
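    The "distribute draw calls, then composite" idea can be sketched as a toy model (all names here are hypothetical stand-ins; the real Hydra driver intercepts D3D/OpenGL calls, not Python objects):

    ```python
    # Toy model of splitting a frame's draw calls across GPUs and
    # compositing the partial renders. Hypothetical sketch, not the
    # actual Hydra dispatch logic.

    def distribute_draw_calls(draw_calls, num_gpus):
        """Round-robin assignment of draw calls to GPUs."""
        buckets = [[] for _ in range(num_gpus)]
        for i, call in enumerate(draw_calls):
            buckets[i % num_gpus].append(call)
        return buckets

    def composite(partial_frames):
        """Depth-aware merge: keep the nearest fragment per pixel."""
        result = {}
        for frame in partial_frames:
            for pixel, (depth, color) in frame.items():
                if pixel not in result or depth < result[pixel][0]:
                    result[pixel] = (depth, color)
        return result
    ```

    The compositing step is the interesting part: with depth information per partial frame, the merged image is order-independent, which is what would let the load balancer assign calls freely.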
     
  14. fbomber

    Newcomer

    Joined:
    Jun 9, 2004
    Messages:
    156
    Likes Received:
    17
  15. Demirug

    Veteran

    Joined:
    Dec 8, 2002
    Messages:
    1,326
    Likes Received:
    69
    I don't think that they will split single process calls.

    Who says no AFR? I haven't seen any proof that they can manage modern games by distributing a single frame to multiple GPUs and merging the results.
     
  16. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    It won't be that useful if different calls go to different chips. Programmers do that themselves anyway. At any rate, GPUs need to gain device-to-device communication pronto. Right now, all the traffic has to go via CPU memory.
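    A sketch of that cost (the classes are stand-ins, not a real driver API): without peer-to-peer support, a GPU0→GPU1 transfer is really two copies over PCIe, staged through host RAM:

    ```python
    # Sketch: without device-to-device (peer-to-peer) transfers, moving
    # a buffer between two GPUs takes two hops through CPU memory.
    # Hypothetical objects, not a real driver API.

    class Memory:
        def __init__(self, name):
            self.name = name
            self.data = {}

    def copy(src, dst, key):
        """Copy one buffer; returns a label for the hop taken."""
        dst.data[key] = src.data[key]
        return f"{src.name}->{dst.name}"

    gpu0, gpu1, host = Memory("gpu0"), Memory("gpu1"), Memory("host")
    gpu0.data["buf"] = [1, 2, 3]

    # Two hops: device-to-host, then host-to-device.
    hops = [copy(gpu0, host, "buf"), copy(host, gpu1, "buf")]
    ```

    Direct device-to-device support would collapse those two hops into one and keep the CPU's memory bus out of the path entirely.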
     
  17. Demirug

    Veteran

    Joined:
    Dec 8, 2002
    Messages:
    1,326
    Likes Received:
    69
    I know, but let's take a look at the Direct3D compute shader interface. The process call is Dispatch, which takes the dimensions of the thread-group grid. A shader can get its position in the grid as a parameter. If you want to split the grid across multiple chips, you run into the problem that at least one chip gets the wrong positions. You would need to add additional instructions to the shader, and that isn't easy, as DirectX shaders have been signed since DX10. But this is the easier problem. Shader Model 5 allows random read/write access to memory. Currently I cannot see how shaders that make use of this can be safely split across multiple GPUs without explicit hardware support. And even in that case, any third-party solution can only call functions that are exposed by publicly accessible APIs.
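    The group-position problem can be shown with a toy one-dimensional Dispatch (Python standing in for a compute shader, names hypothetical): if each chip naively runs half of the grid, the second chip's group IDs restart at zero, so correct results would need an offset patched into the shader — exactly the extra instructions mentioned above:

    ```python
    # Toy model of a D3D11-style Dispatch(numGroups): each thread group
    # receives its group ID. Splitting the grid across two chips naively
    # resets the second chip's IDs to zero; an explicit offset fixes it.
    # Hypothetical sketch, not real driver behavior.

    def dispatch(num_groups, shader, group_offset=0):
        """Run `shader` once per group, passing the group's ID."""
        return [shader(gid + group_offset) for gid in range(num_groups)]

    shader = lambda gid: gid * 10  # stand-in: output depends on group ID

    whole = dispatch(8, shader)                        # single chip

    naive = dispatch(4, shader) + dispatch(4, shader)  # split, IDs wrong
    fixed = dispatch(4, shader) + dispatch(4, shader, group_offset=4)
    ```

    The `fixed` split matches the single-chip result only because the offset was injected — and with signed DX10+ shaders, a third party can't simply rewrite the shader to inject it. Random read/write access in SM5 is worse still, since two chips could then write to overlapping memory with no way to merge safely.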
     
  18. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    I don't see much advantage at all. The only real advantage over SLI/Crossfire is the ability to keep your old card for added performance.

    But even that is dubious, as typically (in my case, for example) I eBay my old card upon buying a new one. So the question would be hypothetical: keep my old card with Hydra and get 30-50% more performance, or sell it and put the proceeds toward a 30-50% faster new card in the first place? The second option obviously has huge benefits in power consumption, and doesn't require a motherboard with multiple PCI-E slots either (which my current board lacks).
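    As rough arithmetic (made-up numbers purely for illustration, including the assumption that performance scales linearly with money spent):

    ```python
    # Back-of-envelope comparison of the two upgrade paths.
    # All numbers are made up for illustration.
    old_card_perf = 100      # arbitrary performance units
    new_card_price = 300     # currency units
    resale_value = 120       # what the old card fetches on eBay

    # Option A: keep the old card, add a 50%-faster new one,
    # and assume Hydra delivers ~40% multi-GPU scaling.
    option_a = (old_card_perf * 1.5) * 1.4

    # Option B: sell the old card, put the proceeds toward an even
    # faster single card (assuming perf scales linearly with price).
    budget_b = new_card_price + resale_value
    option_b = old_card_perf * 1.5 * (budget_b / new_card_price)
    ```

    Under these made-up numbers the raw performance comes out about even, and the single-card option still wins on power draw and motherboard requirements — which is the point being made above.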

    There's also the issue of DX revisions. I assume pairing a DX11 card with a DX10 card would downgrade the whole setup to DX10? Again, a major downgrade, and one that would severely limit the appeal of pairing older cards with new ones.

    Would no AFR mean no SLI/CF input lag, though? That would be nice, but I doubt many people would actually care.
     
  19. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Techreport is a bit less joyous.

     