NVIDIA GF100 & Friends speculation

Discussion in 'Architecture and Products' started by Arty, Oct 1, 2009.

  1. XMAN26

    Banned

    Joined:
    Feb 17, 2003
    Messages:
    702
    Likes Received:
    1
    Until such a card comes to be, the comparison is not valid, as you are using an OC'd card to compare to a non-OC'd card. It's the same screwed-up philosophy people used to compare the 4870X2 to the GTX285, hell, even the 5870 compared to either the 4870X2 or the GTX295.
     
  2. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,518
    Likes Received:
    8,726
    Location:
    Cleveland
    By that same logic, there should be no comparisons with the GF100/Fermi because it is not commercially available. :roll:
     
  3. XMAN26

    Banned

    Joined:
    Feb 17, 2003
    Messages:
    702
    Likes Received:
    1
    Here's the problem: we know the GF100 is coming and details are emerging; tomorrow all kinds of things will be known about it. Does anyone here, except for Charlie and his anti-Nvidia posting, know thing one about ATI's refresh or possible release date?
     
  4. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    Source is the same for both numbers - neliz (he posted them at XS). No idea where he got them from.

    Not sure how that's relevant. Was just pointing out to PSU-failure that his assumption that the Extreme gap would be smaller than the Performance gap didn't hold true for the numbers given. Use your judgment as to the reliability of those numbers.
     
  5. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Yeah well, comparing the HD 5870 to Fermi is comparing an available card to one that isn't. I maintain that the comparison with the "hypothetical" 5890 makes sense.
     
  6. Broken Hope

    Regular

    Joined:
    Jul 13, 2004
    Messages:
    483
    Likes Received:
    1
    Location:
    England
    Surely that's going to be embarrassing for AMD, having Nvidia beat them at their own technology? What's it going to mean for games if devs start using higher polygon counts, which will slow AMD cards down a lot more than Nvidia ones?

    I wonder why AMD didn't do something similar when working on their DX11 card.
     
  7. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    Given the loving and mentoring nature of NV, I would expect them to deal harshly with any info given out, NDA or not. Look at what they did to Zotac over the 'leak' about the 250 to Anandtech. Given that I sent him the emails, and I know where I got it, I can say with certainty that NV's punishment system is rather arbitrary and meant more for keeping 'close friends' in fear of them than for keeping info from getting out.

    That said, they suck at keeping secrets, of late AMD is MUCH better. There is more than a little irony there.

    As for them saying something, or not saying something, it doesn't matter. NDA or not, if they do something that embarrasses NV, they will have sh*t land on them. I have seen it a dozen times in the past. In this example, they might just get their samples delayed two weeks 'accidentally', or more likely, their allocation cut to basically zero.

    Gotta love 'partners' that do things like that. Then again, they are still telling AIBs that TSMC 55nm shortages are to blame for the lack of GT200bs. :)

    -Charlie
     
  8. PSU-failure

    Newcomer

    Joined:
    May 3, 2007
    Messages:
    249
    Likes Received:
    0
    Comparison doesn't make that much sense if the P score comes from NV, as they surely give a best-case-scenario score... so with PhysX acceleration enabled and the inflated CPU score that implies.

    With this in mind, P22k for a "GTX380" seems mediocre as that's ~18k on the P graphics score, less than 10% higher than a 5870.

    X15k, on the other hand, would imply alien technology used everywhere, even more so if associated with P22k.
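    For reference, a minimal sketch of the back-of-envelope math above, assuming Vantage combines GPU and CPU scores as a 0.75/0.25 weighted harmonic mean and that a PhysX-boosted CPU score lands around 60k (both are assumptions for illustration, not figures from this thread):

    ```python
    # Sketch only: back out the implied GPU score from an overall Vantage "P" score.
    # The 0.75/0.25 GPU/CPU weighting and the 60k PhysX-boosted CPU score are
    # assumptions for illustration, not confirmed numbers.
    def implied_gpu_score(overall, cpu_score, w_gpu=0.75, w_cpu=0.25):
        # overall = 1 / (w_gpu / gpu + w_cpu / cpu)  ->  solve for gpu
        return w_gpu / (1.0 / overall - w_cpu / cpu_score)

    print(round(implied_gpu_score(22000, 60000)))  # ~18.2k, i.e. the ~18k mentioned above
    ```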
     
  9. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,436
    Likes Received:
    264
    It's because, in general, games didn't push the triangle limit and resolutions have increased a lot over those years, so the focus has been on pixel power. Aside from resolution, I suspect this is because of the extra development effort needed to create a range of models. Maybe the scalability of tessellation will change that. Or EyeFinity and Nvidia's 3D Surround will catch on and pixels/clock will be even more necessary. We'll see...

    I don't know what's going on in that test, but RV670 has a setup rate of 1 tri per clock.
     
  10. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    Perhaps you should consider all metrics before coming to such conclusions, i.e. the performance differences being talked about: how do they measure up in terms of die size difference, number of transistors, board power, etc.? ;)
     
  11. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    They changed stuff all right, but I am unconvinced it is for the better. GF100 will be the best case; things only scale DOWN from here. With the ATI route, they are fixed, and you have a set target to code against.

    -Charlie
     
  12. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    Not if you count units. :) That said, I think they did, but the real question is whether they did it by enough, especially for derivatives.

    -Charlie
     
  13. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    What would Charlie Demerjian's ideal GPU architecture look like?

    That's an interesting statement. So having the same geometry performance in entry level and flagship parts is now supposed to be a good thing? I think it's going to be hard even for you to downplay what Nvidia seems to have done here.
     
  14. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26

    Most gamers really couldn't care less about how "big" the chip is.
     
  15. John Reynolds

    John Reynolds Ecce homo
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    4,490
    Likes Received:
    262
    Location:
    Westeros
    Not directly, but they do care about cost, and whether or not it will work with their existing system. I'm hoping it's not over $500 for that beast, and that my Corsair 620W PSU can power it.
     
  16. jimmyjames123

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    810
    Likes Received:
    3
    Err, depends on how one spins it, correct? GF100 has ~3.1 billion transistors, while the HD 5970 has ~4.3 billion transistors. In other words, the HD 5970 has ~40% MORE transistors than GF100. So who is really more efficient than whom? :)
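    A quick sanity check on that ratio, using only the transistor counts quoted above (a throwaway snippet, nothing more):

    ```python
    # Transistor counts as quoted above, in billions.
    gf100, hd5970 = 3.1, 4.3
    print(f"HD 5970 has {(hd5970 / gf100 - 1) * 100:.0f}% more transistors than GF100")  # ~39%
    ```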
     
  17. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    Well Nvidia has shown they are capable of being competitive in cost if the performance delta demands it.
     
  18. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    Maybe you could explain how their proposed change could affect anyone for the worse, then?

    I was under the impression that being able to alleviate perceived bottlenecks is what it's all about - and somehow Nvidia seems to have identified tri-setup/tessellation as their main bottleneck, whereas AMD went with doubling raster, FLOPS and texturing rates.
     
  19. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    And you wonder why I don't sign them..... :)

    -Charlie
     
  20. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    113
    Location:
    New Zealand
    Most gamers pay their own electricity bills, and going by volume, most of them buy cards or computers with graphics cards that don't need a PCI-E power connector. I'm sure even enthusiasts are starting to notice an increase in their energy bills. Performance per watt is becoming an even more important metric than performance per mm^2.
     