What is the state of consumer multiprocessor computers?

Discussion in 'PC Hardware, Software and Displays' started by Sxotty, Nov 4, 2003.

  1. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,894
    Likes Received:
    344
    Location:
    PA USA
    I was thinking that the Prescott is rumored to dissipate over 100 watts of heat. Then I thought of VIA's and Transmeta's super-low-power, low-heat processors; it seems to me you could have something like ten of them for the same power consumption, and surely 10 @ 1 GHz would beat 1 @ 3.5 GHz or whatever.
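
    A hedged back-of-envelope sketch of that trade-off (the wattage and clock figures below are the rumoured numbers from the post, not datasheet values):

```python
# Back-of-envelope comparison of one hot chip vs. many cool ones.
# All figures are the post's rough numbers, not measured specs.

PRESCOTT_WATTS = 100       # rumoured dissipation of one 3.5 GHz part
LOW_POWER_WATTS = 10       # assumed budget per 1 GHz VIA/Transmeta-class chip
BUDGET = PRESCOTT_WATTS    # spend the same total power either way

n_low_power = BUDGET // LOW_POWER_WATTS   # how many cool chips fit the budget
aggregate_ghz = n_low_power * 1.0         # combined clock across all of them
single_ghz = 3.5

print(n_low_power, aggregate_ghz, single_ghz)
# The aggregate clock is roughly 3x the single chip's -- but only if the
# workload splits into 10 threads; single-threaded code still sees 1 GHz.
```

    The last comment is the catch the rest of the thread turns on: the aggregate only wins for parallel workloads.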

    Just wondering if you guys think that things will eventually go this way or not.
     
  2. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,062
    Likes Received:
    1,021
    You are not likely to see multiple dies with external interconnects (i.e. pins, balls, whatever) go mainstream. Packaging costs, sockets, board complexity et cetera ensure that multi-die processors will carry a higher infrastructure price tag than single-die ones.

    Even though Apple has equipped its PowerMacs with dual processors for ages, it hasn't extended beyond that, and PC manufacturers have been even more reluctant. The Opteron makes multiprocessor systems both relatively easy and cheap, and allows a variety of MP topologies. Unfortunately, its price tag keeps such systems beyond private use, and the desktop versions are crippled in terms of multiprocessing.

    Having a larger number of energy-efficient processors instead of one single-thread power burner makes a lot of sense for some environments, and the blade-server market is a perfect example of a niche where that kind of system thrives.

    Longer term though, on-die multiprocessors seem likely to become the order of the day. The question is rather one of direction. The entire "Cell" project is based on the observation that classical CPUs spend more and more resources increasing the execution speed of a single thread, causing more and more of the processor to sit idle more of the time. The direction they propose is to eventually go massively parallel on a single die - the bet being that even if some of the processors cannot be fully exploited, the overall efficiency will still scale much better over time than continuing along the current path.

    Intel's "hyperthreading" and similar schemes try to exploit unused functional units to process another thread in parallel - the same problem addressed in a much less radical way. Hyperthreading has an optimum gain per unit of invested control logic - beyond that it makes more sense to put two or more hyperthreaded processors on the same die. IBM is already doing this with the Power5, putting 4 such dies = 8 CPUs = 16 virtual CPUs on a single MCM.
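
    As a rough illustrative model of why SMT gains flatten out (the issue-width and utilization figures here are invented for illustration, not Power5 or Pentium 4 measurements):

```python
# Toy model of SMT: a core can issue SLOTS_PER_CYCLE instructions per
# cycle; one thread only fills a fraction of them, and a second hardware
# thread can at best soak up the leftovers. Numbers are assumptions.

SLOTS_PER_CYCLE = 4
THREAD_UTILIZATION = 0.6   # assumed fraction of slots one thread fills

single_ipc = SLOTS_PER_CYCLE * THREAD_UTILIZATION       # 2.4 IPC alone
smt_ipc = min(SLOTS_PER_CYCLE, 2 * single_ipc)          # capped by issue width
speedup = smt_ipc / single_ipc

# The Power5 arithmetic from the post: 4 dies x 2 cores x 2 threads.
virtual_cpus = 4 * 2 * 2

print(round(speedup, 2), virtual_cpus)
```

    Once two threads saturate the issue slots, a third virtual thread buys little - which is why, past that point, extra cores beat extra SMT contexts.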

    The differences between extensions of the classical approach and a Cell-like approach will grow with the number of processing units. It is a software issue as much as a hardware issue. The Wintel paradigm doesn't even do a particularly good job at the most basic level of parallel processing (vanilla SMP). If you actually want the processors to cooperate more intimately on the same overall problem, you want an architecture and a software environment that is geared towards this. You want more of a supercomputing architecture (use lots of processors to solve a single problem fast) than a server architecture (use lots of processors to solve a large number of independent problems fast).
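
    The two styles can be sketched side by side; this uses Python threads purely to show the structure, and the function and workload names are illustrative, not any real API:

```python
# Sketch of the two parallelism styles described above.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(lo, hi):
    """Sum the integers in [lo, hi) -- a stand-in for real work."""
    return sum(range(lo, hi))

# "Supercomputing" style: many workers cooperate on ONE problem.
# Sum 0..999999 by splitting it into 4 chunks and recombining.
N, WORKERS = 1_000_000, 4
step = N // WORKERS
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    parts = pool.map(chunk_sum,
                     range(0, N, step),        # chunk starts
                     range(step, N + 1, step)) # chunk ends
    one_problem = sum(parts)                   # results must be merged

# "Server" style: many workers each handle an INDEPENDENT request;
# no coordination or merging between them is needed.
requests = [(0, 100), (100, 200), (200, 300), (300, 400)]
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    answers = list(pool.map(lambda r: chunk_sum(*r), requests))

print(one_problem, answers)
```

    The split/merge steps in the first half are exactly the coordination that vanilla SMP software does poorly, while the second half is the embarrassingly independent case servers already handle well.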

    Wintel, today, has only moved in the direction of the second, which is much more readily available than the first. Thus, for single users, multiprocessing beyond two or possibly four processors isn't likely to ever bring much benefit. It may be that PCs will never move beyond 2-4 processors on a single die, but will instead invest silicon resources in making those perform as fast as possible - pretty much an extension of the current state of affairs.

    Actually, the architectural benefits and possibilities opened up by on-die multiprocessing might never be enough to move much beyond the single-processor Wintel paradigm as far as general use is concerned. I don't see that the bulk of Wintel users make such performance demands that the current model can't continue to serve them for the foreseeable future. Add industry inertia to that, and the future of PCs would seem eminently predictable. No massively parallel systems to be seen on that horizon.
     
  3. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,894
    Likes Received:
    344
    Location:
    PA USA
    Well, I do remember the multicore-on-one-chip idea being bandied about. I guess that does make more sense. Wasn't Intel discussing stacking them or something? Perhaps some of these super-efficient chips could be stacked like that, though I guess, as always, yields would plummet.
     
