Nvidia GT300 core: Speculation

Discussion in 'Architecture and Products' started by Shtal, Jul 20, 2008.

Thread Status:
Not open for further replies.
  1. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Well they aren't the only ones from my experience, but you're nearly certainly right that most companies start with a 0. Doesn't mean Charlie is excusable for still not getting it right, though...

    I said you only started the process and stopped in time so that you can still send the metal layers to the fabs later (i.e. a respin). The chips that actually come out of the fab will be A1 (or A12 in NV's case), but that's not really the point from a schedule POV.

    I included 4 weeks for testing+fixing, but yeah, that might be too short. It's hard to get real-world data on that kind of thing sadly...
     
  2. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    In my experience, the pre-tape out verification time for a chip has been going up instead of down. Larger farms make life easier, but they also make designers more wasteful.

    E.g. when you had 2 machines, you only reran block-level simulations for the blocks that changed. With 20 machines, you run full regression suites every weekend. With 200, you run them daily, no matter what happened.

    In addition, you may throw more random tests at the problem, or write a few more directed tests. But in practice, manpower is the limiting factor, not farm capacity.
     
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,479
    Likes Received:
    219
    Location:
    msk.ru/spb.ru
    Is it? I don't think we should draw such conclusions from the GT200 vs. RV770x2 scenario when in every previous single-vs-multi-GPU matchup the single GPU came out victorious.
    If anything it says a lot about how bad the GT200 design is, not that multi-GPU is the only way to go in the high-end.

    I've been thinking about this for some time, and it's not that multi-GPU in the high-end is the only way to go, it's that you have to have an answer to what your competition is selling.
    So the best way to answer AMD's mGPU high-end strategy, while still maintaining your own big-single-core high-end strategy, would be to... launch the GT30x middle-class GPU first!
    And then you can decide where you're going from there: you can build your own mGPU card to counter AMD's high-end, you can finish that big single-GPU card in time to counter AMD's mGPU solution, or you can even do both (like right now, where we have the GTX285 and GTX295 at the same time).
    But in any case it makes so much sense to launch your new mid-range GPU against your competition's mid-range GPU first that I wouldn't be surprised at all if that's exactly what NV is doing with GT3xx -- and with a GT30x middle-class GPU coming out first, you really don't need that GT212 anymore...
     
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
  5. Lukfi

    Regular

    Joined:
    Apr 27, 2008
    Messages:
    423
    Likes Received:
    0
    Location:
    Prague, Czech Republic
    It is my opinion that from now on, multi-GPU solutions will always win over monolithic ones (assuming similar manufacturing costs for both competing products). The current situation has a lot to do with GT200 not being very good. But in general, performance scales linearly with transistor count, while yields drop off disproportionately as dies get bigger. The only problem with today's multi-GPU solutions is software: a monolithic GPU means performance is guaranteed in any game, while multi-GPU means living in uncertainty.

    But maybe this problem can be solved by adding special multi-GPU logic that would make the whole solution behave more like a single-GPU system. Yeah, that sounds like Hydra... perhaps future solutions will use this "dispatch processor" model? I don't know, but I feel these are areas worth exploring.
    Hmm, that's an interesting point. For AMD, this strategy proved useful. On the other hand, people kind of expect a performance bomb, not a middle-class part.
    Maybe they'll launch a middle-class GPU alongside an SLI-on-a-stick version that would claim the performance crown and create positive publicity to bolster single-GPU card sales.
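    Lukfi's yield point can be sketched with the classic Poisson defect-density model (an illustrative assumption on my part; the defect density and wafer size below are made-up round numbers, not figures from the thread):

    ```python
    import math

    def poisson_yield(area_mm2, defects_per_mm2=0.004):
        """Classic Poisson yield model: Y = exp(-D * A)."""
        return math.exp(-defects_per_mm2 * area_mm2)

    def good_dies_per_wafer(area_mm2, wafer_area_mm2=70685.0):
        """Candidate dies per 300mm wafer times yield (edge losses ignored)."""
        return (wafer_area_mm2 / area_mm2) * poisson_yield(area_mm2)

    # One 500 mm^2 die vs two 250 mm^2 dies with the same total transistor count:
    big = good_dies_per_wafer(500)        # good monolithic GPUs per wafer
    small = good_dies_per_wafer(250) / 2  # good two-die "GPUs" per wafer
    print(big < small)  # the two-die option wins per wafer
    ```

    Because yield falls exponentially with area while performance only scales linearly with transistors, halving the die more than doubles the number of good dies per wafer in this model, which is exactly the economic argument for multi-GPU.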
     
  6. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Well, for that you'd need to keep power in check. It's kinda hard to see NV doing that while making an uber GPU in the first place; why would they put constraints on themselves? The GTX 285/295 is kind of a freak thing, since it's not often that you can launch a chip and its shrink 6 months later.
     
  7. bowman

    Newcomer

    Joined:
    Apr 24, 2008
    Messages:
    141
    Likes Received:
    0
    If we're going to be relying on SLI and CF for high-end boards from now on this one will be the last I'll ever buy.
     
  8. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Shouldn't GT212 have arrived by now?

    How big is GT200b if shrunk to 40nm? Does that count as a mid-range GPU then? Surely, by 2009Q4 GTX285 performance is what we'll be calling "mid-range", so that would give us a good idea of a "mid-range" GT3xx.

    Jawed
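    A back-of-the-envelope answer to Jawed's question, assuming GT200b is roughly 470 mm² at 55nm and that area scales with the square of the node ratio (an ideal shrink that real layouts never quite achieve):

    ```python
    def ideal_shrink(area_mm2, from_nm, to_nm):
        """Optimistic area scaling: area shrinks with the square of the node ratio."""
        return area_mm2 * (to_nm / from_nm) ** 2

    gt200b_55nm = 470.0  # approximate GT200b die size in mm^2 (assumption)
    print(round(ideal_shrink(gt200b_55nm, 55, 40)))  # ~249 mm^2
    ```

    Around 250 mm² would indeed land in what was then considered mid-range die-size territory, though a straight optical shrink rarely hits the full theoretical scaling.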
     
  9. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,900
    Likes Received:
    2,225
    Location:
    Germany
    With a 200-225 watt chip, you're going to have problems going mGPU-on-a-stick if you want to stay inside the currently accepted power budget of 300 watts (dunno if that's really the max of the PCIe spec). You either have to clock it down, bin your chips really well for lower voltage, and so on.

    40nm has apparently made HD4850 performance levels about 30 percent cheaper power-wise (given the rumored 80-watt TDP for the HD 4770 versus the proposed 110 watts of the HD4850).

    If the real next-gen GPUs aren't going to deliver significantly more bang per watt (which has been AMD's primary goal for about 1.5 years now, mind you), I really doubt that mGPU-single-connector cards are the way to go for the future.

    Take the HD 4870 X2 with a TDP of 289 watts and a maximum measured power consumption of about 370 watts: make this 70 percent and re-invest the spare power back into performance... you do the math.

    Now, that doesn't apply to real multi-card multi-GPU apparently, but IMO the el-cheapo solutions for "we have the longest bars, errr, fastest card" could be over soon.
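    CarstenS's "you do the math" works out roughly like this, using the figures quoted in his post (the 70-percent target is his hypothetical; whether it applies to the TDP or the measured peak is my reading):

    ```python
    tdp_4870x2 = 289.0     # watts, official TDP quoted in the post
    measured_peak = 370.0  # watts, max measured draw quoted in the post

    target = 0.70 * measured_peak   # cap the card at 70% of its measured peak
    spare = measured_peak - target  # headroom freed up to re-invest in performance
    print(round(target), round(spare))  # 259 and 111 watts
    ```

    In other words, a roughly 110-watt chunk of the power budget could go back into clocks or units, which is the "re-invest" he's alluding to.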
     
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,853
    Likes Received:
    2,776
    Location:
    Finland
    Though even as a 650MHz preview model, the 4770 was pretty much matching the 4850, and the retail models are apparently clocked at 750MHz, so they can apparently fit a bit more than just 4850 performance into that 80W.
     
  11. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    How much of that gain is due to GDDR5, which is, additionally, clocked really low?

    Jawed
     
  12. Lukfi

    Regular

    Joined:
    Apr 27, 2008
    Messages:
    423
    Likes Received:
    0
    Location:
    Prague, Czech Republic
    =>CarstenS: Who says the (hypothetical future) chips ought to have a 200-watt TDP by themselves?
     
  13. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,900
    Likes Received:
    2,225
    Location:
    Germany
    If you're referring to the test at Guru3D, it was most of the time closer to the HD4830 than to the HD4850. I factored in the clock-frequency increase of the final retail model, but refused to simply apply it at a hundred percent, due to my suspicion that the HD 4770 is going to be a bit bandwidth-starved with only 60 percent more bandwidth than the 4670.

    Frankly, I don't know, since I have yet to see a convincing GDDR5 implementation really delivering on the promise of low-power operation. The HD4670 proved that AMD's PowerPlay 2.0 can be quite effective, and later partner models of the HD 4850 also proved to have quite appealing idle modes. But neither the HD4870 nor the HD 4890 is a promising indicator of GDDR5 being very power efficient.

    Nobody - but if they aren't in the range of 150-200 watts, they'll hardly be besting the performance levels of current offerings, IMO.
     
  14. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    I thought you were talking about maximum power, not idle power, i.e. 80W versus 110W.

    Jawed
     
  15. Sxotty

    Legend Veteran

    Joined:
    Dec 11, 2002
    Messages:
    5,043
    Likes Received:
    410
    Location:
    PA USA
    Too true. Well, unless they can make multi-GPU gaming actually work just as well, instead of the super-patchy performance that's great in some games and crappy in others.
     
  16. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    I'm secretly hoping that D3D11 makes AFR-multi-GPU break :lol:

    For good.

    Now, hurry up and get them D3D11 games out.

    Jawed
     
  17. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,900
    Likes Received:
    2,225
    Location:
    Germany
    I am/was. But you seem to be under the impression that GDDR5 is more power efficient in its current usage. That I doubt - and the high power draw of current implementations even in idle mode, which is apparently due to GDDR5, is why I said I'd like to see a convincing GDDR5 implementation that delivers on its theoretical low-power promises.
     
  18. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    The idle usage doesn't tell you what the load usage is.

    Jawed
     
  19. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /

    Nice evil wish, Mr. Jawed. I doubt it's gonna happen. After all, AMD sits on the committee that makes the spec. :-/
     
  20. Novum

    Regular

    Joined:
    Jun 28, 2006
    Messages:
    335
    Likes Received:
    8
    Location:
    Germany
    I don't know exactly yet, but isn't the "multithreaded rendering" stuff in it supposed to help even multi-GPU cards with distributing the workload?
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.