ELSA hints GT206 and GT212

Discussion in 'Architecture and Products' started by AnarchX, Sep 9, 2008.

  1. Jawed

    Legend

    It seems 40nm has already been delayed. 65nm was a bit sickly at birth for the IHVs - it seems to have affected NVidia more.

    Why have 65nm/55nm apparently given NVidia so much grief, comparatively speaking?

    So you're saying that AMD is maybe using a library of regularised, "guaranteed compatible" structures as opposed to the "semi- or fully-custom" design that NVidia is using?

    Jawed
     
  2. igg

    Newcomer
    Based on the unofficial roadmap, I think the GTX260 (maybe GT206 -> GTX270) should be the best bang for the buck until they introduce GT300.

    Let's hope they'll announce the GTX270/290 soon :)
     
  3. rpg.314

    Veteran
    Thanks for that, Jawed.

    That seems to make a lot of sense, and Arun broadly agreed with it as well. Though I am surprised that GT206 has been delayed so much.

    As an aside, I wonder how other folks on these forums feel, but I think that AMD's approach here has been vindicated. With one chip they can launch four cards from $200 up to the top (if they wanted to release a 4850 X2 with GDDR5, that is), plus the yield-salvage 4830. It could be a one-off thing (as GT200 was delayed), but for 4-5 months NV has had nothing to hide behind, and AMD's entire next-gen line-up, from very cheap to very costly, is out in the market.
     
  4. Cookie Monster

    Newcomer
    So many refreshes.

    Possibly GT216 is the 40nm version of GT206. What is GT212 then? Possibly a 192 SP/384-bit (or 256-bit) mid-range card to replace all the "filler" G92b-based cards?

    It's strange that nothing has been leaked from the green camp.
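
    For reference, the raw bandwidth arithmetic behind that 384-bit vs. 256-bit speculation. A minimal sketch; the ~1.1 GHz GDDR3 data rate is an illustrative assumption, not a leaked figure:

    [code]
    # Peak memory bandwidth: bus width in bytes times effective data rate.
    def mem_bandwidth_gbs(bus_width_bits, data_rate_gtps):
        return bus_width_bits / 8 * data_rate_gtps

    # Assuming 1.1 GHz GDDR3 (2.2 GT/s effective) on both bus widths:
    print(mem_bandwidth_gbs(384, 2.2))  # ~105.6 GB/s
    print(mem_bandwidth_gbs(256, 2.2))  # ~70.4 GB/s
    [/code]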
     
  5. Arun

    Legend
    And that 'leak' is false too. (Note that I'm not contesting the timeframes; delays can happen, but AFAIK that's clearly NOT their current roadmap, and at least one of GT212 or GT216 doesn't have GDDR5. Everything in that leak is so fishy it's ridiculous!)
    igg: March should be GT216 if I'm right, see above: "3T|120A|3R -> 0.6TFlops+ -> GT216/Late March"
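
    As a sanity check on that shorthand, reading "120A" as 120 ALUs at NVIDIA's usual 3 flops per SP per clock (dual-issue MAD + MUL, the counting used for G80/G92) - the implied shader clock below is an inference, not a quoted spec:

    [code]
    # "120A -> 0.6 TFLOPS+" implies a roughly G92-class shader clock,
    # assuming 3 flops per SP per clock (MAD + MUL).
    sps = 120
    flops_per_sp = 3
    target_tflops = 0.6

    shader_clock_ghz = target_tflops * 1000 / (sps * flops_per_sp)
    print(shader_clock_ghz)  # ~1.67 GHz
    [/code]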
     
  6. igg

    Newcomer
    0.6 TF; this should be the performance/mainstream chip, right?
     
  7. Arun

    Legend
    Yes, I presume the idea would be to replace G94 with G92-like performance (probably better in some ALU-limited cases, worse in others, especially with AA/AF). That's if those specs are right; either way, it won't be ultra-high-end - that much at least I know for sure from more than one source :)
     
  8. CarstenS

    Legend Subscriber
    Oh, forget about it - I didn't look at the date of that posting. Since it's >10 days old, please disregard the following.
    [strike]Two things in your very own evidence might contradict that: First, the very low memory bandwidth, indicating the use of 800 MHz GDDR3 and an additionally disabled quad-ROP/ROP partition, whichever you prefer.

    Second, the Tesla C1060, featuring a fully armed and operational battle station... the force runs equally strong with this one... *shush Darth, shut up*. What I mean is that this product is also quite a bit lower-specced than the corresponding desktop part, the GTX280 (<200 vs 236 watts), although it carries the same number of functional units, as opposed to the Quadro CX.[/strike]
     
    #272 CarstenS, Oct 31, 2008
    Last edited by a moderator: Oct 31, 2008
  9. Domell

    Newcomer
    But there are NO GT214 and GT218 on their roadmap right now. There are only GT212 and GT216, which are supposed to be released next year (most likely Q2).
    ...and don't possible GT212 specs sound more sensible as something like this:
    96 TMU/384 SP/32 ROP, or 80 TMU/320 SP/32 ROP? I think 480 SP or 512 SP isn't a good choice. 384 SP or 320 SP should be enough when clocks are around 2 GHz; I think that's possible at 40nm.
    Another way is to not increase the number of TMUs and ROPs, but to significantly increase the number of shader processors.
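
    In numbers, the clocks-versus-width tradeoff described above, using the same speculative 3 flops/SP/clock counting (all of these configurations are hypothetical):

    [code]
    # Fewer SPs at higher clocks can match a wider, slower chip.
    def tflops(sps, shader_clock_ghz, flops_per_clock=3):
        return sps * flops_per_clock * shader_clock_ghz / 1000

    print(tflops(384, 2.0))  # ~2.30 TFLOPS
    print(tflops(320, 2.0))  # ~1.92 TFLOPS
    print(tflops(512, 1.5))  # ~2.30 TFLOPS, same as 384 SPs at 2 GHz
    [/code]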
     
  10. Jawed

    Legend
    Anyone have any idea what kind of shrinkage NVidia will get with the 40nm process, compared against either 65nm or 55nm?

    The reason I ask is that Arun thinks that NVidia is not squishing features as closely together as possible - preferring to space them out. If that methodology is kept for 40nm, what kind of shrink will occur?

    Another thing is that Windows 7 makes D3D10.1 a first-class citizen for the desktop UI. Does this increase the likelihood that NVidia will introduce a top-to-bottom 10.1 line-up before the D3D11 cards arrive?

    Jawed
     
  11. Arun

    Legend
    The S1070 also has 800MHz GDDR3, but it really uses the same memory chips as the GTX 280; it's just clocked down to improve reliability. Sorry for not disregarding that point, couldn't just let it pass! ;)

    GT212/GT214/GT216 all very much do exist and are on NVIDIA's roadmap. GT218, I haven't heard in a while but I wouldn't really expect it before the others anyway. Surely you don't think the public leaks always perfectly represent NVIDIA's internal roadmap?

    Isn't that exactly what I proposed? :grin:
    G94: 32 TMUs, 16 ROPs, 64 SPs
    G92: 64 TMUs, 16 ROPs, 128 SPs
    GT214: 24 TMUs, 12 ROPs, 120 SPs

    2x compared to 55nm, excluding non-digital stuff which should shrink very little, is a fair bet. That assumes a slight increase in transistors/mm² in addition to the process' natural shrink (to compensate for the lower SRAM shrink), which seems like a reasonable bet to make in my mind. Of course, if the feature set/arch isn't 100% the same, or if they optimized noticeably more for power (as I've suggested everyone *should* do on 40nm), it's harder to estimate.
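
    For context, the ideal optical shrink alone gets most of the way to that 2x. A back-of-envelope sketch, ignoring that SRAM and analog blocks scale worse in practice:

    [code]
    # Ideal area scaling goes with the square of the feature-size ratio.
    def ideal_density_gain(old_nm, new_nm):
        return (old_nm / new_nm) ** 2

    print(ideal_density_gain(55, 40))  # ~1.89x over 55nm
    print(ideal_density_gain(65, 40))  # ~2.64x over 65nm
    [/code]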

    Honestly, I'm more interested in the kinds of clocks they could achieve. Obviously the 90->65/55 transition for NV has been awful both in terms of density *and* performance, so if we assume some of those are fixable internal issues and 40nm allows for higher-than-traditional improvements at the same time (although who knows at what cost), it could get interesting. Not that the same (i.e. interesting) isn't true for AMD also, of course! :)

    Just to be clear, it's certainly not the only factor; I'm just arguing it's very likely to be one of them. Whether that deliberate spacing-out is the largest factor or the smallest, who knows!

    I've heard the possibility that GT21x is D3D10.1 a few times, but who knows, there's enough FUD flying around that I don't think it makes a lot of sense to speculate about it at this point.
     
  12. Blazkowicz

    Legend
    Wouldn't no D3D10.1 be the safer bet, as we are talking about another generation of G80 derivatives?
     
  13. CarstenS

    Legend Subscriber
    Really? Not even the option of a lower voltage being used?
     
  14. Arun

    Legend
    I really don't know, but I would *presume* it to be a combination of both; lower voltage than max in order to improve lifetime, and lower clocks for that given voltage in order to improve reliability.
     
  15. bowman

    Newcomer
    DDR3 is catching up to GDDR3 now - seeing as they are sacrificing bandwidth for reliability anyway, why not roll GPUs with DDR3 IMCs that support ECC for Tesla? It seems to me that's one of the biggest complaints about GPGPU: no ECC, GPUs are intended for 'sloppy' operations (real-time rasterization), etc.

    Wouldn't this open it up to a bigger market?
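
    To illustrate what ECC support actually buys: a toy single-error-correcting Hamming(7,4) code. The SECDED codes on real ECC DIMMs protect 64-bit words with 8 check bits, but the principle is the same:

    [code]
    # Hamming(7,4): 4 data bits, 3 parity bits, corrects any single-bit flip.
    def encode(d):  # d: four data bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p4 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p4, d[1], d[2], d[3]]

    def correct(c):  # c: seven received bits
        s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
             | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
             | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)
        if s:
            c[s - 1] ^= 1  # syndrome is the 1-based error position
        return [c[2], c[4], c[5], c[6]]  # recover the data bits

    word = encode([1, 0, 1, 1])
    word[5] ^= 1  # simulate a single-bit memory error
    assert correct(word) == [1, 0, 1, 1]
    [/code]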
     
  16. Jawed

    Legend
    Allison Klein, in her PDC presentation, very briefly said that it's D3D10, not 10.1.

    Jawed
     