NVIDIA GF100 & Friends speculation

Discussion in 'Architecture and Products' started by Arty, Oct 1, 2009.

  1. fellix

    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,552
    Likes Received:
    514
    Location:
    Varna, Bulgaria
    Some post-modern GPU-Z art:

[image: GPU-Z screenshot]

    :lol:
     
  2. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
Higher than expected shader clock, much lower memory clock than the 5870.
     
  3. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
At those clocks, the 480 has only ~10% more bandwidth. :-(
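A quick sanity check of that figure. This is a sketch under assumptions: the 900MHz base memory clock from the screenshot, GF100's 384-bit bus, and the HD 5870's 256-bit bus at 1200MHz, both GDDR5 moving 4 bits per pin per base clock:

```python
# Rough GDDR5 peak-bandwidth comparison. Assumptions: GF100 with a 384-bit
# bus at the rumored 900 MHz base memory clock; HD 5870 with its 256-bit
# bus at 1200 MHz. GDDR5 transfers 4 bits per pin per base clock.

def gddr5_bandwidth_gb_s(bus_width_bits, base_clock_mhz, transfers_per_clock=4):
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return bus_width_bits / 8 * base_clock_mhz * 1e6 * transfers_per_clock / 1e9

gtx480 = gddr5_bandwidth_gb_s(384, 900)   # 172.8 GB/s
hd5870 = gddr5_bandwidth_gb_s(256, 1200)  # 153.6 GB/s
print(f"GTX480: {gtx480:.1f} GB/s, HD5870: {hd5870:.1f} GB/s, "
      f"advantage: {(gtx480 / hd5870 - 1) * 100:.1f}%")
```

That works out to ~12.5% more, in the same ballpark as the "~10%" above.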
     
  4. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
Surely the die size is not 621mm², but rather a very misguided estimate...

The 900MHz base memory clock might well be legit, though, since that can actually be read out electronically (which the die size, for obvious reasons, cannot ;))
     
  5. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Unless of course, there is something about the minimum voltage required to keep the chip running?

    edit: and :runaway: about everyone discussing fake screenshots.
     
  6. fellix

    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,552
    Likes Received:
    514
    Location:
    Varna, Bulgaria
    Pretty much a possible fit for a 621mm² GF100 die on the 41x41mm BGA substrate. Sure it will be a tight fit for the SMD elements, but I think there would be enough room for them and the IHS contact perimeter:

[image: GF100 die/package mock-up]
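The geometry here is easy to sanity-check (a sketch, assuming a square die as the mock-up implies):

```python
import math

# Does a 621 mm² (assumed square) die fit on a 41x41 mm BGA substrate?
die_area_mm2 = 621.0
substrate_mm = 41.0

die_side_mm = math.sqrt(die_area_mm2)          # ~24.9 mm on a side
margin_mm = (substrate_mm - die_side_mm) / 2   # ~8.0 mm left per side

print(f"die side: {die_side_mm:.1f} mm, margin per side: {margin_mm:.1f} mm")
```

About 8mm per side is not much once the IHS contact perimeter and the SMD components have to share it, hence the "tight fit".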
     
  7. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    923
    Likes Received:
    3
    Location:
    Germany

To spill the beans:

Yep, fake. It was also posted on the 3DCenter.de forum, but there one forum member, Neocroth, quickly found the wrong device ID: it is from a GTX275 Lightning.
     
  8. WeiT.235

    Newcomer

    Joined:
    Nov 21, 2008
    Messages:
    44
    Likes Received:
    0
Amazing! W1zzard already knew the specs of the GTX480 half a month ago!!
     
  9. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
Hmm, I don't know why that would be, but I don't know much about processes, so let's say that's the case.

NVIDIA can still clock down significantly at idle and presumably lower the voltage at least a little. Since dynamic power scales roughly with frequency times voltage squared, running the GPU at 40% of its maximum clock and 90% of its maximum voltage would make it draw about 97.2W, assuming a 300W TDP. Surely, if the cooling system can handle 300W without perforating your eardrums, it should be able to dissipate ~100W fairly silently.
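The arithmetic behind that 97.2W figure, using the usual first-order dynamic-power model (a sketch; static leakage is ignored, so it is optimistic for a big 40nm chip):

```python
# First-order dynamic power model: P ≈ P_max * (f/f_max) * (V/V_max)^2.
# Leakage is ignored, so the real idle draw would be somewhat higher.

def scaled_power(p_max_w, clock_fraction, voltage_fraction):
    """Estimated power when clock and voltage are scaled down from max."""
    return p_max_w * clock_fraction * voltage_fraction ** 2

idle_w = scaled_power(300.0, 0.40, 0.90)  # 300 * 0.4 * 0.81 = 97.2 W
print(f"estimated idle draw: {idle_w:.1f} W")
```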
     
  10. Neocroth

    Newcomer

    Joined:
    Mar 6, 2010
    Messages:
    1
    Likes Received:
    0
    Location:
    Salzburg, Austria
Apart from that, the BIOS number also appeared strange to me, and well... if they'd really made it ~600mm², I bet Charlie would be laughing his ass off :wink:
     
  11. ZerazaX

    Regular

    Joined:
    Oct 29, 2007
    Messages:
    280
    Likes Received:
    0
2560x1600 = 4.1MP
1920x1200 * 3 = 6.9MP

Yeah it's 50% more pixels, but it's a lot closer than people think (people are always shocked to realize that 2560x1600 has nearly twice the pixels of 1920x1200).

So with extreme textures and AA, it definitely won't hurt
     
  12. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    Actually it's 68% more pixels. Quite a difference. But I agree, there are probably cases where 1GB isn't enough for a 4MP resolution, especially with AA, lots of intermediate render targets and HQ textures in the mix.
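The numbers behind both posts, as a quick check:

```python
# Pixel counts for the two setups under discussion.
single_30in = 2560 * 1600        # 4,096,000 pixels (~4.1 MP)
triple_24in = 3 * 1920 * 1200    # 6,912,000 pixels (~6.9 MP)

extra = triple_24in / single_30in - 1          # fraction of extra pixels
vs_single_24in = single_30in / (1920 * 1200)   # 2560x1600 vs one 1920x1200

print(f"{extra:.1%} more pixels; 2560x1600 is {vs_single_24in:.2f}x 1920x1200")
```

So the 68% figure is right (68.75% exactly), and 2560x1600 is ~1.78x a single 1920x1200, close to but not quite 2x.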
     
  13. mao5

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    276
    Likes Received:
    5
  14. SimBy

    Regular

    Joined:
    Jun 21, 2008
    Messages:
    700
    Likes Received:
    391
Not sure what this is about. The guy is trying to prove the HD5870 is faster than the GTX480 without tessellation? Kind of pointless not to use tessellation in a tessellation benchmark ;)
     
    #2874 SimBy, Mar 6, 2010
    Last edited by a moderator: Mar 6, 2010
  15. Sontin

    Banned

    Joined:
    Dec 9, 2009
    Messages:
    399
    Likes Received:
    0
  16. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
DX11 surely does matter, but what do you think the average Joe is going to buy when he sees the 5870 trouncing the 480 in games?

If ATI's tessellation was ahead of its time 3 years ago, nVidia's tessellation is closer to the time now. But guess what? It's still too early, and in the best case nVidia has put too much into tessellation and not enough into the current demands of gaming.

The 480 is almost certainly losing to the 5870 in some games, and vs non-reference models, i.e. 1GHz-core versions, it is going to get thumped hard in most current gaming benchmarks.

And that is why we keep seeing Unigine and Far Cry 2 and not much else.
     
  17. WeiT.235

    Newcomer

    Joined:
    Nov 21, 2008
    Messages:
    44
    Likes Received:
    0
  18. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    With tessellation off the engine is still running in D3D11 mode. Not sure why GTX480 isn't any faster than HD5870 running the apparently "slower" version of this benchmark.

    There may not be any difference in performance between versions 1.0 and 1.1, though, in tessellation-off rendering. If the only difference in these versions is the "culling" for tessellation-on mode, then the comparison is valid.

Of course, upcoming reviews will be based on different drivers for both cards.

    Still, it seems reasonable to expect GTX480 to be faster here.

    Jawed
     
  19. Andrew Lauritzen

    Andrew Lauritzen Moderator
    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,632
    Likes Received:
    1,250
    Location:
    British Columbia, Canada
    Ok let's get one thing straight here... DX11 != tessellation! There's a lot more in DX11 than just tessellation... in fact I would say DirectCompute is a lot more important and will be way more commonly used than tessellation.

    Also I have my doubts as to whether that benchmark really represents a "good"/typical tessellation workload. If you look at the wireframe it leaves a lot to be desired in terms of consistent triangle sizes, adaptive and distance-based LOD, etc. FWIW in the only games that I know to date to include tessellation (Dirt 2 and AvP) there is very little performance hit (if any) to enabling it on the 5870.

    That said, it's great that NVIDIA's tessellation performance looks to be that good, but I hope they didn't sacrifice anything in other areas for it.
     
  20. Sontin

    Banned

    Joined:
    Dec 9, 2009
    Messages:
    399
    Likes Received:
    0
That's a good question. I know that a GTX285 gets 45 FPS in DX10 at the same position. So the GTX480 would be only 25% faster with DX11 and without tessellation...

Yes, yes. But we know nothing about GF100's DirectCompute performance. All this DX11 downplaying is a result of GF100's tessellation performance.

AMD has had the only tessellation hardware available to developers for the last 6 months. It seems normal that all games with tessellation so far (AvP or Dirt 2) don't show very much of it. Metro 2033 could be a game-changer because of nVidia's help.
     


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.