NVIDIA Fermi: Architecture discussion

Discussion in 'Architecture and Products' started by Rys, Sep 30, 2009.

  1. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
     May? No ... February (probably, with some proper high tides and incense burning) ... though "Fermi" might launch earlier (and won't play schedule games like GT200b)

     As far as I know, the Unigine engine used in older games always heavily favoured nVidia.
     
    #1001 neliz, Nov 4, 2009
    Last edited by a moderator: Nov 4, 2009
  2. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    May, sounds realistic if you talk about widespread availability.
     
  3. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
     So what? I responded to a notion that suggested raising Unigine's DX11 demo to an unquestionable industry standard. In that regard I don't care which IHV a tech demo favours or not; I still won't recognize it as any sort of standard and will judge by real game performance, not just a bunch of selected titles that favour IHV A or B.

    What exactly is there so hard to comprehend in that one?
     
  4. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    746
    Likes Received:
    41
    Location:
    Copenhagen
     So does 'Heaven' in DX10 mode. Not sure exactly where the limit is, but especially the 4870 vs. GTX 260 comparison is pretty skewed.
     
  5. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Do you have an example for me?

     I just looked up the list of licensees on Unigine's homepage, but there's not a single game listed that I've ever heard of so far. In fact, most of them are "unannounced" or "in development".
    http://unigine.com/clients/#games
    Or is there yet another list?
     
  6. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
     Over at nvnews the "Tropics" demo saw 4870s (and the X2) perform just above the 9800GTX; even a GTX260 with AA wasn't much slower than a 4870 without it.
     
  7. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,402
    If their previous demos were favouring NVIDIA what makes you think that their new demo (made without NV's DX11 h/w in sight) isn't favouring AMD?
     
  8. PeterT

    Regular

    Joined:
    May 14, 2002
    Messages:
    702
    Likes Received:
    14
    Location:
    Austria
    I think the real question when we finally see the Unigine benchmark running on more than one kind of DX11 hardware will not be which IHV it "favors", but rather whether it's actually benchmarking tessellation performance or rather rasterizer performance for very small polygons (or maybe something entirely different).

    In any case I'm sure it will be interesting.
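     A back-of-the-envelope sketch of why that distinction matters (all numbers purely illustrative, not measurements from any hardware): as the tessellation factor rises, the average screen area per triangle shrinks toward sub-pixel sizes, at which point small-polygon rasterization efficiency, rather than the tessellator itself, can dominate the result.

```python
# Illustrative only: average screen pixels per triangle as tessellation
# increases. Once triangles cover less than about a pixel, rasterizer
# behaviour for tiny polygons can dominate a "tessellation" benchmark.

def pixels_per_triangle(screen_px, base_tris, tess_factor):
    # An edge tessellation factor of f multiplies triangle count
    # roughly by f^2 (a simplification of the DX11 tessellation scheme).
    tris = base_tris * tess_factor ** 2
    return screen_px / tris

# Hypothetical scene: 1920x1080 screen, 10,000 base triangles.
for f in (1, 4, 16, 64):
    print(f, pixels_per_triangle(1920 * 1080, 10_000, f))
```

     At a factor of 16 the sketch already yields well under one pixel per triangle, which is the regime PeterT is pointing at.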
     
  9. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
     Sorry, I was under the impression you were talking about real games, not benchmarking demos. :)

     For those: I simply don't know, since I regard them as almost as useless as a 3DMark score; you never know who sponsored this or that demo. Seemingly that's a concern for a lot of people even when talking about real games, the ones publishers and developers intend to make money with and therefore cannot afford to lock a double-digit percentage of their potential customer base out of.

    edit:
    FWIW: Did you try to turn on MSAA in furmark and see how it affects framerates?
     
  10. psolord

    Regular

    Joined:
    Jun 22, 2008
    Messages:
    444
    Likes Received:
    55
     Unfortunately, we can't trust games either; you don't know who sponsored the game, too. Let me remind you of the Batman: Arkham Asylum fiasco, in which you have to force AA on ATI cards, making them far slower.

     Now a new bright example that favours Nvidia cards has surfaced: Borderlands! This industry is funny. I wonder why Microsoft isn't doing something about it?
     
  11. Florin

    Florin Merrily dodgy
    Veteran Subscriber

    Joined:
    Aug 27, 2003
    Messages:
    1,707
    Likes Received:
    345
    Location:
    The colonies
    Microsoft? What's it to them? Why doesn't ATI do something about it?
     
  12. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    They wouldn't have to do much if the developer played it a bit more straight in the first place.
     
  13. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,057
    Likes Received:
    3,114
    Location:
    New York
    Whoa!!

     I think that's the most egregious display of architecture affinity I've ever seen in a game. :???:

    How do you know it's not an ATI driver issue? They aren't exactly known for great launch day support of new games.
     
  14. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
     Maybe GPUs live in an entirely different place, but whatever it is that you're describing is complete fiction in my world.

     The problems you tend to hit during silicon validation are not the ones that are easy to narrow down with a targeted test. Those are usually found and fixed during module simulations. You will typically rerun your chip-level verification suite on the real silicon, but that's the easy part: those tests are supposed to pass because they already did in RTL simulation or on some kind of emulation device. The only reason you rerun them is to make sure that basic functionality is sane.

     The things you run into on the bench are hard-to-find corner cases. Something that hangs every 10 minutes. Or a bit corruption that happens every so often. They are triggered when you run real applications, system tests that are impossible to run, even on emulation, because it just takes too long. In telecom, this may be a prolonged data transfer that suddenly errors out or a connection that loses sync. In an SoC, buffers that should not overflow suddenly do, or a crossbar locks up. A video decoder may hang after playing a DVD for 30 minutes. When these things trigger, a complicated dance starts to try to isolate the bug; it can take days to do so. Very often there is a software workaround, setting configuration bits that, e.g., lower performance just a little by disabling a local optimization, but sometimes there is not. These kinds of problems are present in every chip, and even if they're not, you need wall-clock time to make sure they are not. Did I mention that sometimes you need to work around one such bug first before you can run other system tests?

     And then there's the validation of analog interfaces: making sure that your interface complies with all specifications across all PVT corners takes weeks even if everything goes right. (Corner lots usually arrive a week after the first hot lot, so your imaginary two-week window has already been cut in half.) You need to verify the slew rates and drive strengths of the output drivers, the input levels of all comparators, the short-term and long-term jitter of all PLLs, etc.

     The whole idea that you can do all of that quickly and then do a respin in two weeks is laughable (and you'd be an idiot to do so anyway: you know there are stones left unturned if you do it too quickly). If everything goes well, five weeks is the bare minimum.

     And what about the claim that you can fix logic bugs with just 2 metal layers (the real smoking gun, if ever there was one, that you really don't know what you're talking about, thank you)? This would mean changing only the two upper metal layers out of 7 or 8, which is surprising because, as I'm sure you're aware, the wire density of M7 and M8 is low. Not a lot of interesting stuff happens at that level, so the chance of fixing anything remotely useful there is very, very low.

     You usually get away with not having to touch M1, but in 99% of cases you don't even bother looking for a fix that doesn't touch M2. Respin time is 100% a function of the lowest layer you change: if you change M2, it doesn't matter that you also modify M3-M8. The majority of backup wafers are parked before M2 or V1.
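     The "lowest layer changed" rule can be sketched as a tiny cost model (every layer name and day count below is hypothetical, purely to illustrate the point, not real fab data):

```python
# Illustrative only: respin turnaround is set by the lowest (earliest-fabbed)
# metal layer in the change set, because backup wafers are parked just before
# that layer; touching additional higher layers adds nothing to the schedule.

# Hypothetical remaining fab days per restart point (made-up figures).
DAYS_FROM_LAYER = {
    "M1": 42, "M2": 35, "M3": 30, "M4": 26,
    "M5": 22, "M6": 18, "M7": 14, "M8": 10,
}

LAYER_ORDER = ["M1", "M2", "M3", "M4", "M5", "M6", "M7", "M8"]

def respin_lead_time(changed_layers):
    """Lead time for a metal-only respin, driven solely by the lowest layer."""
    lowest = min(changed_layers, key=LAYER_ORDER.index)
    return DAYS_FROM_LAYER[lowest]

# Changing M2 dominates even if M5 and M8 are also modified.
print(respin_lead_time({"M2", "M5", "M8"}))  # prints 35, same as M2 alone
```

     The point of the sketch: an M2+M5+M8 change costs exactly what an M2-only change costs, which is why nobody bothers restricting fixes to the top two layers.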

    You may or may not have great insider info about tape-out dates and other dirty laundry. It's very entertaining, but please stay out of the kitchen?
     
  15. FUDie

    Regular

    Joined:
    Sep 25, 2002
    Messages:
    581
    Likes Received:
    34
    TWIMTBT - The Way It's Meant To Be Tessellated (TM)

    PM me for info on where to send my royalty check ;)

    -FUDie
     
  16. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,057
    Likes Received:
    3,114
    Location:
    New York
     In all seriousness, won't Nvidia have a bit of catching up to do? It's possible that they're getting builds of DX11 titles in development, but surely there aren't any A1 samples in developers' hands currently...
     
  17. sethk

    Newcomer

    Joined:
    May 1, 2004
    Messages:
    93
    Likes Received:
    1
     As a gamer, I'll be happy enough if Fermi support for existing DX9/10/10.1 titles is solid on release. TWIMTBP is a really large program compared to ATI's DX11 list, and it tends to cover a lot more of the AAA release titles, regardless of which tech they're on. While I want to see DX11 titles as much as the next guy, as a gamer I'd rather have solid day-one support for games I actually want to play than for a few select games that I only ever run as benchmarks.

     That said, I would expect Nvidia's developer relations to rapidly build DX11 support once they ship their DX11 hardware. I just don't think Nvidia needs to be as selective as ATI in supporting just a few titles; they have a lot more developer-support resources to spread around.
     
  18. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
     Tech demos and benchmarks have their own value beyond doubt and deliver valuable data. They shouldn't, however, influence someone's buying decision more than a long selection of game performance results (and yes, if the list of games is long enough, you can include an equal number of titles that favour IHV X and IHV Y), and in that notion I'm not excluding any benchmark, Futuremark's applications for instance.

     After all, as a mainstream consumer I have the awkward tendency to buy a GPU to play games on it and not to endlessly masturbate over benchmark results.
     
  19. ChrisRay

     ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
     Unless there's something fundamentally wrong with their DX11 hardware, I don't really see this as a problem. Especially if it's using DX11 specifications.

    I am positive Nvidia will support DX11 in their devrel at the very least.

    Thanks Ail. I really needed that mental image.
     
  20. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
     Well, you and I aren't immune to such sports in the very least. You should start to worry if your palm starts to grow hair :lol:
     