Article: Tesla 10-Series Analysis [Part 1]

Discussion in 'GPGPU Technology & Programming' started by B3D News, Jun 26, 2008.

  1. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12

    Nvidia came out with a very similar alternative to Microsoft's shading language at a time when MS was promoting DX9 closely with ATI's GPUs. That, coupled with the fact that Cg was (and is) the shading language of choice for the PS3 (a competitor to the MS/ATI Xbox 360 hardware), would kinda make it pretty obvious that neither would support it officially, now wouldn't it?

    Nevertheless it does work on ATI GPUs, since Cg wasn't designed to be tied to proprietary hardware, nor was it specified to be DirectX-only like HLSL is. It also works as an alternative to GLSL (OpenGL's shading language), for instance.


    As for CUDA on "Larrabee", sure, it won't be the "preferred" API, but then again, developers (and ultimately Nvidia) stand to benefit a lot from it, as it does simplify development and there's still no alternative with this kind of installed hardware base (modern x86 CPUs, G8x-and-above Nvidia GPUs, etc).
    OpenCL is still very much a long-term project, and it will probably take most of its cues for future GPGPU directions from CUDA/Brook anyway.
     
  2. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Could you please stop talking down to me... I knew all of this already...

    Anyway, I think we will just have to agree to disagree for now.
     
  3. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    You can attach 2 or more memory chips to one MC; remember the 1GiB 8600/8500 GTs with 16 memory chips. :wink:
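    As a rough sanity check of that configuration (bus and chip figures below are the commonly cited ones for those boards, assumed here for illustration):

    ```python
    # How a 1 GiB 8600/8500 GT can hang 16 chips off a narrow bus:
    # multiple chips per memory controller (MC).
    bus_width_bits = 128          # 8600 GT memory bus
    controller_width_bits = 32    # width of one MC
    num_chips = 16
    chip_capacity_mib = 64        # 512 Mbit per chip

    controllers = bus_width_bits // controller_width_bits
    chips_per_mc = num_chips // controllers
    total_mib = num_chips * chip_capacity_mib

    print(controllers, chips_per_mc, total_mib)  # 4 controllers, 4 chips each, 1024 MiB
    ```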
     
  4. rendezvous

    Regular

    Joined:
    Jun 12, 2002
    Messages:
    347
    Likes Received:
    12
    Location:
    Lund, Sweden
    The question is whether the changes made in GT200 for better GPGPU performance are worth it.
    The inclusion of, e.g., a DP FP unit has not helped rendering performance in games at all, and I doubt that it will be utilized in any games through CUDA.

    If NVIDIA had focused only on gaming performance, we would no doubt have a chip that was faster/smaller/cheaper/more efficient. Such a chip would be more competitive against the RV770 in gaming scenarios than today's GT200, which would lead to better margins for the product.

    The Tesla solution gets a free ride on the graphics solution, but given the argument above, that ride is only free for Tesla. Designing a chip for only GPGPU workloads would indeed be very expensive, and it would probably be a cost they couldn't carry today.

    The real question is whether what GPGPU gets for free offsets what the graphics product has to pay for it.
    With the pressure from AMD in the gaming segment, I get the feeling that NVIDIA lost more than it gained by designing a GPU that would be good for both workloads.
     
  5. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    I doubt NVIDIA will stick to that.
    I don't see cycle-accurate trapping of exceptions coming anytime soon; even supporting all the rounding modes is probably too much to ask. I expect NVIDIA will include exception flags next time though, which is enough IMO.
     
  6. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    I think there are two key points to understand:
    - Perf/mm² matters much less for Tesla SKUs than perf/watt. Margins, if you considered the chip alone, are extremely high and there's not much to gain with a new chip; as for perf/watt, while the 3D-only stuff is taking power through leakage, it shouldn't be overestimated.
    - The perf/mm² and perf/watt cost of CUDA is necessary in GeForce anyway, and the cost of DP is minimal. If they wanted, they could remove it completely from all non-top-end chips, but they claimed they won't (presumably because, down the road, there might be consumer CUDA apps using DP).

    Honestly, for a top-end chip like the GT200, it's hard to even get back all the operating costs from gross profit unless the market is completely void of competition for an extremely long time (*cough*)... Remember also that a critical advantage of not having a Tesla-only chip is that it allows you to have a several months head start in terms of process nodes. This, along with the fact the market would have to grow immensely for it to make sense, made Andy Keane very skeptical it could ever make sense when I asked him.

    As for the DP implementation, I am ~300% certain that it'll remain identical (just with different ratios) throughout the DX10.x generation. As for the DX11 generation, who the hell knows what they have in store, so it's hard to judge what will make the most sense by then. I think I made my arguments pretty clear *for this architecture* in the article though! :)
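    To put those ratios in perspective, here's a back-of-the-envelope peak-FLOPS sketch for GT200; the shader clock and per-SM unit counts are the commonly cited GTX 280 figures, used purely for illustration (MAD only, ignoring the extra MUL):

    ```python
    # GT200 (GTX 280) peak-FLOPS sketch: SP vs DP throughput.
    sms = 30
    sp_units_per_sm = 8    # single-precision MAD units per SM
    dp_units_per_sm = 1    # double-precision MAD unit per SM
    shader_clock_ghz = 1.296

    sp_gflops = sms * sp_units_per_sm * 2 * shader_clock_ghz  # MAD = 2 flops
    dp_gflops = sms * dp_units_per_sm * 2 * shader_clock_ghz

    print(round(sp_gflops), round(dp_gflops), round(sp_gflops / dp_gflops))
    # ~622 SP GFLOPS vs ~78 DP GFLOPS, an 8:1 ratio on the MAD units
    ```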
     
  7. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    The head start in process nodes didn't quite work out ;)
     
  8. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Well, it's a *relative* head start, not an absolute head start of course! :) Anyway, remember there won't be a half-node on 40nm and NVIDIA will be a lot more aggressive on that node...
     
  9. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    I'm not quite following TSMC's decision to bypass 45nm and head straight to 40nm. They had that tech down for a long time, I believe.
    Were they trying to undercut Intel at 45nm, or is it a measure to prevent a wider density gap versus Intel's 32nm process by that time?
     
  10. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    The general-purpose 40nm node was ready by the time they'd have gone to 45nm, and their customers don't like to have to tape-out entire families of chips all over again every 3 months and for no good reason. In the handheld market, there definitely have been a few 45nm low-power chips though.
     
  11. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    I'll try to explain this to you from a different perspective.

    The SMs in GT200 make up only 26% of the die. There's tons of scheduling logic in there, plus register files, in addition to the 8 SP MADs and one DP MAD. I doubt the DP unit costs more than 10% of the SM's area. That's 2.6% of the die space, and probably ~1.5% of the card's manufacturing cost.

    Does it make sense to make an entirely new GPGPU chip solely to chop a mere 1.5% off the cost of your main one? Especially if you're planning an eventual die shrink where you can chop that part out, once you're ready to ramp volume beyond the depleted ultra-high-end market?
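    Making the area arithmetic above explicit (these are Mintmaster's estimates, not measured figures):

    ```python
    # Estimated die-area cost of GT200's DP unit.
    sm_fraction_of_die = 0.26   # SMs are ~26% of GT200's die
    dp_fraction_of_sm = 0.10    # upper bound: DP unit <= 10% of an SM

    dp_fraction_of_die = sm_fraction_of_die * dp_fraction_of_sm
    print(f"{dp_fraction_of_die:.1%}")  # 2.6% of the die
    ```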
     
  12. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,492
    Likes Received:
    979
    Location:
    en.gb.uk
    Well I'm interested to know why this approach strategically is better than sticking to FP32 in GPUs and buying Clearspeed.
     
  13. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Wow, Clearspeed hardware has improved lately.
     
  14. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,492
    Likes Received:
    979
    Location:
    en.gb.uk
    Hehe, yeah sorry that was a rather terse and badly phrased question. I'm not even sure who it was aimed at :) I think what I'm trying to get my head round is whether DP in a GPU purely for GPGPU really makes sense given the performance levels they're achieving, I should probably think about it a bit more before I type any more!
     
  15. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Seriously, I had just not checked them out in a while.
     
  16. Sunday

    Newcomer

    Joined:
    Feb 6, 2002
    Messages:
    194
    Likes Received:
    6
    Location:
    GMT+1
    The famous company producing Tesla electric cars has dedicated part of its web page to honoring the famous scientist whose surname it used to brand the company and its main product.

    http://www.teslamotors.com/learn_more/why_tesla.php

    Has anyone noticed whether NVIDIA has done the same? There's no doubt they used the famous name to mark their product as being as revolutionary as the works of Nikola Tesla!

    I couldn't find any such links on NVIDIA's site!

    What's outrageous is that they've trademarked the traditional Serbian surname Tesla for their product:


    "With the world's first teraflop many-core processor, NVIDIA® Tesla™ computing solutions enable the necessary transition to energy efficient parallel computing power."

    By contrast, "Tesla Motors" and "Tesla Roadster" are the trademarks of the car manufacturer, so those are combinations of words, not just the surname of the scientist.

    Even long after his death, the story of his life continues: big companies keep making money off his name without giving him any credit!
     
  17. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    It most certainly does. The exclusion of DP would have had little impact on GT200's die size.
     
  18. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    You should see my books on electrical motors: they're full of this kind of abuse!
     
  19. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Since he referenced GPGPU in general, you could probably lose the 512bit memory interface as well.
     
  20. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    Yeah but a 256-bit GDDR5 part definitely wouldn't have been a good idea considering the issues with GDDR5 availability now. Imagine if Nvidia cards also needed it?
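    For context, a quick bandwidth comparison of the two bus options being debated; the data rates below are period-typical figures (GTX 280-class GDDR3 vs HD 4870-class GDDR5), assumed here for illustration:

    ```python
    # Peak memory bandwidth: wide GDDR3 bus vs narrow GDDR5 bus.
    def bandwidth_gbs(bus_bits, data_rate_gbps):
        """Peak bandwidth in GB/s from bus width and per-pin data rate."""
        return bus_bits * data_rate_gbps / 8

    gddr3_512 = bandwidth_gbs(512, 2.214)  # 512-bit GDDR3 (GTX 280-class)
    gddr5_256 = bandwidth_gbs(256, 3.6)    # 256-bit GDDR5 (HD 4870-class)

    print(round(gddr3_512, 1), round(gddr5_256, 1))  # 141.7 vs 115.2 GB/s
    ```

    So the wide GDDR3 bus still wins on raw bandwidth at those rates; the 256-bit GDDR5 option trades board complexity for dependence on a then-scarce memory type.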
     