Tesla 10-Series Analysis [Part 2 & 3]

Discussion in 'GPGPU Technology & Programming' started by B3D News, Jun 30, 2008.

  1. B3D News

    B3D News Beyond3D News
    Regular

    Joined:
    May 18, 2007
    Messages:
    440
    Likes Received:
    1
    In these last two parts of our Tesla coverage, we quickly interview Andy Keane as we look at the adoption & deployment aspects of GPGPU, and then we look into real-world CUDA applications and the related financial and competitive aspects in-depth...

    Read Part 2
    Read Part 3
     
  2. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,003
    Likes Received:
    235
    Location:
    UK
    Well, I guess those parts are just too absurdly long for anyone to want to comment... :) I would definitely be interested in whether people agree with my views on the competitive dynamics and financial prospects on Page 6: http://www.beyond3d.com/content/articles/107/6 - That part is inherently subjective and obviously it's all very debatable, but I still thought it was worth publishing given the article's focus. So, I wonder how many of you think I'm right there, or completely off-track, etc.?
     
  3. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    I think nV is a bit too optimistic here.

    Sure, there are new markets for GPGPU, but honestly, it's unrealistic to expect those customers to buy new hardware. From what I've seen in the financial sector, a mere 9600GT is sufficient to do the calculations required for stock/option/trade analysis.

    That's realistically what any end user wants: a specific API running on a legacy system.
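
    For a rough sense of the kind of calculation being talked about here, a minimal sketch of per-option Black-Scholes call pricing in CUDA (entirely hypothetical data, parameters and names, not taken from any of the systems mentioned above) looks something like this:

    #include <cuda_runtime.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Cumulative standard normal distribution via the complementary error function.
    __device__ float cnd(float x) { return 0.5f * erfcf(-x * 0.70710678f); }

    // One thread prices one European call option (Black-Scholes closed form).
    __global__ void price_calls(const float *S, const float *K, const float *T,
                                float r, float sigma, float *call, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float sqrtT = sqrtf(T[i]);
        float d1 = (logf(S[i] / K[i]) + (r + 0.5f * sigma * sigma) * T[i]) / (sigma * sqrtT);
        float d2 = d1 - sigma * sqrtT;
        call[i] = S[i] * cnd(d1) - K[i] * expf(-r * T[i]) * cnd(d2);
    }

    int main(void)
    {
        const int n = 1 << 20;                      // hypothetical batch of options
        size_t bytes = n * sizeof(float);
        float *hS = (float*)malloc(bytes), *hK = (float*)malloc(bytes);
        float *hT = (float*)malloc(bytes), *hC = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { hS[i] = 100.0f; hK[i] = 95.0f; hT[i] = 1.0f; }

        float *dS, *dK, *dT, *dC;
        cudaMalloc(&dS, bytes); cudaMalloc(&dK, bytes);
        cudaMalloc(&dT, bytes); cudaMalloc(&dC, bytes);
        cudaMemcpy(dS, hS, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dK, hK, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dT, hT, bytes, cudaMemcpyHostToDevice);

        price_calls<<<(n + 255) / 256, 256>>>(dS, dK, dT, 0.05f, 0.2f, dC, n);
        cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
        printf("call[0] = %f\n", hC[0]);
        return 0;
    }

    Each option is priced independently, so a batch like this keeps even a mid-range part busy without needing anything exotic from the hardware.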
     
  4. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    7,946
    Likes Received:
    2,370
    Location:
    Well within 3d
    Can you clarify what numbers are being used to determine the value of the CPU business, such as what segments and devices fall under that umbrella?

    GPUs wouldn't apply to certain segments, and there are possible sources of price compression that will cut the amount of revenue that is transferable from CPUs to GPUs.
    So far there's been at least a 1:4 CPU:GPU ratio, if not lower, so any foray into GPGPU clustering will bring along a number of CPUs, whose cost will eat into what customers end up paying for the GPU portion.
    The other issue is the compression of systems.
    On certain tasks, a GPU system can reach performance levels similar to those of a much larger and more expensive CPU cluster.
    Whatever price differential exists between the GPU cluster and the CPU one is money that has simply evaporated.
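
    To put those two effects into concrete numbers (all of them made up purely for illustration, not taken from any real pricing), a quick sketch:

    #include <stdio.h>

    /* Hypothetical figures only: the point is the shape of the calculation,
       not the specific prices or node counts. */
    int main(void)
    {
        /* A hypothetical 64-node CPU cluster that a GPU system displaces. */
        double cpu_cluster = 64 * 5000.0;          /* $320,000                      */

        /* Its hypothetical replacement: 4 host CPUs driving 16 GPUs (1:4). */
        double host_cpus   = 4  * 5000.0;          /* $20,000 still spent on CPUs   */
        double gpus        = 16 * 1500.0;          /* $24,000 spent on GPU boards   */
        double gpu_cluster = host_cpus + gpus;     /* $44,000 total                 */

        printf("still goes to CPUs: $%.0f\n", host_cpus);
        printf("simply evaporates:  $%.0f\n", cpu_cluster - gpu_cluster);
        return 0;
    }

    Even in this toy case, only a fraction of the displaced CPU spend is available to transfer to the GPU vendor; the rest either still goes to host CPUs or, being the price differential, is never spent at all.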

    I'm still not clear on whether Nvidia's efforts at making GPGPUs robust enough for all kinds of work are complete.
    Their idea of validating with conservative memory clocks and burn-in is not something the higher-end segments and large clusters will find sufficient: it only screens out some device failures and does nothing to handle ongoing reliability, fault reporting, or transient errors.

    Also, do you have more feelers in the trial markets for these first implementors--the coders and testers who are not being touted by Nvidia?
    Issues with getting GPU code that just works, the relative strength (or weakness) of debugging capabilities, the relative dearth of hardware instrumentation and exception handling, and so on have little effect on peak performance but a large influence on whether an architecture will be deemed stable, trustworthy, and worth the trouble.

    Intel should start letting more news on Larrabee trickle out in the next few quarters, which I'd expect to have a significant chilling effect on Nvidia and AMD's GPGPU efforts.
    Some may simply wait until they can get their hands on a product that can be treated like a CPU in a pinch, and which, if some slides are to be believed, could even go without one entirely.
    While there is no guarantee, I'd suspect that Larrabee's CPU heritage will mean it will be capable of exception handling and the chip will be well-instrumented.

    Larrabee may not lead in performance, but it may turn out to be a more effective architecture if it can leverage the benefits of the extant CPU platforms.
     
    #4 3dilettante, Jul 1, 2008
    Last edited by a moderator: Jul 1, 2008
  5. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,427
    Likes Received:
    257
    Yep. :grin:

    I've bookmarked the articles, but haven't gotten a chance to read them yet.
     
  6. MulciberXP

    Regular

    Joined:
    Oct 7, 2005
    Messages:
    331
    Likes Received:
    7
    ...
     
    #6 MulciberXP, Jul 3, 2008
    Last edited by a moderator: Aug 29, 2014
  7. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,853
    Likes Received:
    722
    Location:
    London
    Brook+ I have no trouble accepting, but Ct?

    Jawed
     
  8. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    91
    Location:
    San Francisco
    While Ct is promising, we've only got whitepapers about it; AFAIK Intel has not released it to the public.
     
  9. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,853
    Likes Received:
    722
    Location:
    London
    One point of view I've seen is that Ct is a kind of functional programming peg mashed into a procedural hole, and that Intel should have just stuck with NESL.

    I'm really not a languages guy though. I can sympathise with the view that C++ is a mess of objects mashed into procedural too, but I don't feel it in my bones.

    Sadly, in a way, all three of these approaches appear to be API kludges for C. CUDA's "benefit" appears to be that it is the least distant from C: it really just lets Cg and C co-exist without all that nasty 3D pipeline state nonsense getting in the way. That comes at the cost of the shenanigans of manipulating data structures for SIMD width, register file width and memory-coalesce width, not to mention the memory hierarchy dance.
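
    To make that "dance" concrete, here's a minimal sketch (the standard tiled-transpose idea, with hypothetical sizes and names) of what even something as dumb as an out-of-place matrix transpose turns into in CUDA once the accesses are rearranged for coalescing and staged through the on-chip shared memory:

    #include <cuda_runtime.h>
    #include <stdio.h>

    #define TILE 16   // tile edge sized to line up with the coalescing granularity

    // Threads cooperatively stage a TILE x TILE block in shared memory so that
    // both the global read and the global write are coalesced; a naive transpose
    // would make one of the two strided.
    __global__ void transpose_tiled(const float *in, float *out, int width, int height)
    {
        __shared__ float tile[TILE][TILE + 1];     // +1 pad avoids shared-memory bank conflicts

        int x = blockIdx.x * TILE + threadIdx.x;
        int y = blockIdx.y * TILE + threadIdx.y;
        if (x < width && y < height)
            tile[threadIdx.y][threadIdx.x] = in[y * width + x];     // coalesced read

        __syncthreads();                           // whole tile staged before anyone writes

        x = blockIdx.y * TILE + threadIdx.x;       // swap block coordinates for the output
        y = blockIdx.x * TILE + threadIdx.y;
        if (x < height && y < width)
            out[y * height + x] = tile[threadIdx.x][threadIdx.y];   // coalesced write
    }

    int main(void)
    {
        const int W = 1024, H = 1024;              // hypothetical matrix dimensions
        size_t bytes = (size_t)W * H * sizeof(float);
        float *dIn, *dOut;                         // contents left uninitialised; only the access pattern matters here
        cudaMalloc(&dIn, bytes);
        cudaMalloc(&dOut, bytes);
        dim3 block(TILE, TILE);
        dim3 grid(W / TILE, H / TILE);
        transpose_tiled<<<grid, block>>>(dIn, dOut, W, H);
        cudaDeviceSynchronize();
        printf("transpose launched over a %dx%d matrix\n", W, H);
        return 0;
    }

    The actual transposition is one assignment; nearly everything else in the kernel exists to keep the memory system happy, which is exactly the overhead being complained about versus plain C.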

    Clearly a bit of marketing, there's this paper on financial modelling with Ct:

    http://techresearch.intel.com/userfiles/en-us/File/terascale/Ct-appnote-option-pricing.pdf

    Irrespective of arguments over the qualities of the language as a language, it's interesting that in a straight race with a compiler (Visual C++ 2005) there are some formidable speed-ups to be had.

    Jawed
     
  10. cho

    cho
    Regular

    Joined:
    Feb 9, 2002
    Messages:
    416
    Likes Received:
    2
    The SDK for Larrabee is the "Native SDK", not Ct; Ct is just a research project.
     
  11. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    91
    Location:
    San Francisco
    I heard that, though I wouldn't be surprised if a Ct backend for Larrabee is being developed right now.
     
