NVIDIA GF100 & Friends speculation

Discussion in 'Architecture and Products' started by Arty, Oct 1, 2009.

  1. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,985
    Likes Received:
    1,497
    This is what I could find as well.

    It also requires SLI to run either of them, not just for performance but because of display outputs. So the NVIDIA option will be much more expensive than the ATI version, sadly. A 5770 isn't the greatest for Eyefinity, but it only costs $180, so if you play older titles and want enhanced viewing it's pretty cheap. NVIDIA's solution will most likely start at $800 (assuming $400 for the base GF100 card).

    Hopefully in its next generation NVIDIA will set it up to work on a single card.
     
  2. jimmyjames123

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    810
    Likes Received:
    3
    Yes, that is true. Most likely NVIDIA figured that gaming on 3 monitors caters to a very niche market right now, and this type of gamer would be a very likely candidate for SLI anyway due to need/desire for more graphics horsepower, so they didn't sweat too much about it for the first gen of GF100-based cards.
     
  3. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,985
    Likes Received:
    1,497
    Sadly for NVIDIA, there isn't much a 5870 can't play with Eyefinity at 1920x1200 or lower. The cost of higher-resolution monitors will most likely keep higher resolutions away from mainstream multi-monitor users.

    I plan on going Eyefinity with just a 5870, or whatever replaces it this fall.


    But personally, I've tried SLI, CrossFire, and dual-GPU cards, and I'm done with all that.
     
  4. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    745
    Likes Received:
    39
    Location:
    Copenhagen
    So a 3.2 GHz i7 (and probably not with the slowest possible memory/uncore settings) - fellix, how high did you clock? FC2 is somewhat CPU-dependent, after all.
     
  5. Silus

    Banned

    Joined:
    Nov 17, 2009
    Messages:
    375
    Likes Received:
    0
    Location:
    Portugal
    Just like Tech-Report's Core i7 965 Extreme:

    http://techreport.com/articles.x/17986/6
     
  6. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    745
    Likes Received:
    39
    Location:
    Copenhagen
    ...which I think we concluded is running a different test (another reason could be older drivers, of course).
     
  7. air_ii

    Newcomer

    Joined:
    May 2, 2007
    Messages:
    134
    Likes Received:
    0
    So, this is the second NVIDIA release about GF100 and we still don't know anything. If I had a killer product, I'd spread benchmarks left and right to let people know what they should be waiting for... unless the product itself (as opposed to the architecture) is a bit underwhelming. It kind of reminds me of R600, when people were fed info on its horsepower (in terms of GFLOPS) and how it stacked up against G80, with some benchmarks saying how wonderfully it did against G80, and then we had a flop (performance-wise)...

    I'm not a fanboy (although I have owned more ATI than NVIDIA cards) and I do hope that Fermi is a success, as it promises some fancy features and performance, but all this doesn't quite add up to me...
     
  8. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,585
    ATI had a killer product with the HD 5870 and I didn't see any benches two months before launch.
    I think they are still finalizing clocks and drivers.

    There are some benches at Hardware Canucks:
    http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-13.html


    http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-14.html
     
  9. Vincent

    Newcomer

    Joined:
    May 28, 2007
    Messages:
    235
    Likes Received:
    0
    Location:
    London

    http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-14.html

    Naturally the cards we benchmarked weren’t equipped with anything above 512SPs since that is the maximum layout this architecture will allow. If we assume the performance we saw was coming out of the beta, underclocked version of a 512SP GF100 running alpha-stage drivers, this is going to be one hell of a graphics card. On the other hand, if NVIDIA was using 448SP equipped cards for these tests, the true potential of the GF100 is simply mind-boggling. Coupled with the compute power and architecture specifically designed for the rigors of a DX11 environment, it could be a gamer’s wet dream come true.
     
  10. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,490
    Likes Received:
    400
    Location:
    Varna, Bulgaria
    I asked a friend of mine to do the benchmark on his rig, as I don't have an i7 setup to match my HD5870 with. ;)
     
  11. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,322
    Likes Received:
    1,120
    Even an overclocked refresh from AMD will have little chance against this card, it appears.

    Looks like the battle will be fought on price again, where AMD should be in a good position to compete. The GF100's performance seems great, but at, say, $520 it will sit in a small niche, and the 5870 can still thrive; AMD has tons of room to cut the 5870's price by massive amounts. Sub-$300 5870s would probably be no problem for AMD.

    So I guess a lot comes down to what the 3/4 Fermi looks like, die-size-wise and so on (if there is to be a 3/4 Fermi die), and the half-die Fermi as well. It seems that with tweaks, a 1/2-die Fermi could give the 5870 some performance problems (then again, surely AMD can tweak back, i.e. a 5890). Then again, with all NVIDIA's issues getting anything out the door, we're a long way from that. If the 1/2-die Fermi doesn't compete with the 5870 straight up, and it kind of seems like it wouldn't judging by the available benches, NVIDIA has left a massive hole in its lineup below the high end.
     
    #691 Rangers, Jan 18, 2010
    Last edited by a moderator: Jan 18, 2010
  12. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,015
    Likes Received:
    112
    I think Anand misread that. It specifically says a texture unit can compute 1 texture address and fetch 4 texture samples per clock. But you need 4 texture samples for one bilinearly filtered texture fetch, so that's really the same rate as they always had. The difference now is that they can take 4 individual samples for gather4 and return them, something older NVIDIA chips couldn't do (AMD has been able to for a long time). It is also possible efficiency was boosted in other ways; IIRC all NVIDIA chips (G80 and up) quite failed to reach their peak texture rate. Still, 64 units doesn't look like a lot, though in terms of ALU:TEX ratio it is quite comparable to what AMD has.
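    A back-of-envelope sketch of why the 4-samples-per-clock figure collapses to the familiar bilinear rate. The unit count (64) comes from the post above; the 700 MHz texture clock is purely a hypothetical number for illustration:

```python
# Rough bilinear texture-rate arithmetic (illustrative numbers only).
units = 64                # texture units, per the discussion above
clock_hz = 700e6          # ASSUMED texture-domain clock, hypothetical
samples_per_unit = 4      # 4 texture samples fetched per clock per unit

# One bilinearly filtered fetch consumes 4 samples, so the 4-sample
# fetch rate collapses to 1 bilinear texel per unit per clock.
bilinear_texels_per_clock = units * samples_per_unit / 4
peak_bilinear_rate = bilinear_texels_per_clock * clock_hz  # texels/s

print(bilinear_texels_per_clock)   # 64.0 texels/clock
print(peak_bilinear_rate / 1e9)    # 44.8 GTexels/s
```

    The same 4 samples returned unfiltered is what gather4 exposes, which is why the raw sample rate doubles as a gather4 rate without changing the bilinear figure.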
     
  13. air_ii

    Newcomer

    Joined:
    May 2, 2007
    Messages:
    134
    Likes Received:
    0
    The situation was a bit different, as ATI was not under pressure from a recent NVIDIA product launch.

    As to the benchmarks, I had similar results (67 avg) on a 3.2 GHz quad core, although I doubt an i7 would add very much to it. These results are indeed promising, but let me remain sceptical for the moment ;).
     
  14. PSU-failure

    Newcomer

    Joined:
    May 3, 2007
    Messages:
    249
    Likes Received:
    0
    They were not in as bad a situation as NVIDIA is right now, with highly underperforming products all over the place.

    The "conclusion" slide NVIDIA just released is simply ridiculous with its "up to 2x GT200 performance at 8xAA, high res" - quite the best-case scenario, since GT200 is nowhere near Cypress at such settings, which many NVIDIA fans here were calling totally irrelevant a few hours ago.

    I'm quite interested in how they implemented triangle setup, but I'm quite skeptical it will be a determining factor performance-wise, even with quite heavy tessellation, unless the engine is a plain pile of shit.

    Btw, it seems tessellation needed to be included in the lower GPC clock domain rather than the ROP clock domain for some reason, as they will probably be quite similar.
     
    #694 PSU-failure, Jan 18, 2010
    Last edited by a moderator: Jan 18, 2010
  15. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Triangle setup is pretty important once you get past roughly 700k polygons per frame on current cards; it actually starts to become a bottleneck, and past 1 million per frame it becomes the predominant bottleneck. This is why the HD 5xxx series takes a 20%-35% performance penalty with tessellation, regardless of resolution and settings.
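    A rough way to see when serial setup caps the frame rate, assuming the classic single-rasterizer case of 1 triangle per clock and an 850 MHz core clock (roughly an HD 5870); the numbers are illustrative, not measured:

```python
# Setup-limited frame-rate ceiling under a serial 1 tri/clock setup unit.
core_clock_hz = 850e6     # ASSUMED core clock, e.g. roughly an HD 5870
tris_per_clock = 1        # classic serial setup rate

def setup_limited_fps(tris_per_frame):
    """Upper bound on fps if triangle setup were the only bottleneck."""
    return core_clock_hz * tris_per_clock / tris_per_frame

print(setup_limited_fps(1e6))    # 850.0 fps cap at 1M tris/frame
print(setup_limited_fps(10e6))   # 85.0 fps cap with 10x tessellation amplification
```

    The ceiling is comfortable at 1M triangles per frame, but tessellation multiplies the post-expansion triangle count, which is how a fixed setup rate turns into a visible penalty.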
     
  16. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,321
    Likes Received:
    29
    Location:
    msk.ru/spb.ru
    It kind of reminds me of Conroe, when people were fed info on its horsepower (in terms of GFLOPS) and how it stacked up against NetBurst, with some benchmarks saying how wonderfully it did against NetBurst, and then AMD had a flop (performance-wise)...
    You're trying too hard, really.
     
  17. PSU-failure

    Newcomer

    Joined:
    May 3, 2007
    Messages:
    249
    Likes Received:
    0
    Tessellation units by themselves, sure, but what about a real-world scenario?

    Pure tessellation is useless; what makes it powerful are the domain and hull shaders, which are not tied to the setup rate. On top of that, all the other shaders work with the data thrown at them by the tessellation stage, so the higher the tessellation factor, the higher the pressure on the ALUs.

    I really think something went wrong here.
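    A hedged sketch of that ALU pressure: for a triangle patch at integer tessellation factor f, the tessellator emits on the order of f^2 triangles (the exact D3D11 counts differ slightly from this approximation), so domain-shader and downstream shader work grows quadratically with the factor:

```python
# Approximate triangle amplification for a triangle patch at integer
# tessellation factor f. This is a rough f^2 model for illustration;
# exact Direct3D 11 tessellator output counts differ slightly.
def approx_tris(factor):
    return factor * factor

for f in (1, 4, 8, 16):
    print(f, approx_tris(f))   # factor vs. approximate output triangles
```

    Doubling the tessellation factor roughly quadruples the triangles fed to the domain shader and setup, which is why heavy tessellation loads the ALUs long before setup rate alone decides anything.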
     
    #697 PSU-failure, Jan 18, 2010
    Last edited by a moderator: Jan 18, 2010
  18. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Also from there,

    If they were going to ship a full part, why not say so? Still, the card certainly looks impressive vis-à-vis the 5870. How it matches up to its true competition, the 5970, remains to be seen.

    Anyone willing to run the benches on a 5970 with a stock Intel i7 960, 6 GB of 1600 MHz memory, and an ASUS Rampage II Extreme motherboard running Windows 7? :smile:
     
  19. PatrickL

    Veteran

    Joined:
    Mar 3, 2003
    Messages:
    1,315
    Likes Received:
    13
    Any reviews with a real card and independent benchmarks? Or just benchmarks provided by NVIDIA?
     
  20. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    So according to the Hardware Canucks benchmarks, this GF100 is 24%-28% faster than an HD 5870 in Far Cry 2?

    If we take that as a best-case scenario, it doesn't look all that great; it's actually last year all over again. Is there any reason to assume this isn't a best-case scenario, i.e. is NVIDIA known for demonstrating underpowered parts and choosing unflattering benchmarks?
     