Intel to make bid for nVidia? **Reuters**

Discussion in 'Graphics and Semiconductor Industry' started by Razor1, Oct 4, 2006.

  1. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    I think many people feel this way. But what I've been pondering lately is how Intel and AMD would justify axing high-end GPU development while still designing high-end CPUs.

    I get the impression that the GPU justification is that so few people need a high-end GPU's level of performance, but the exact same argument is true for CPUs. After all, how many business and office workers would care, let alone notice, if their C2D or A64 were replaced with a C7? Not many, I would hazard, and if I'm not mistaken that is the largest market segment. So why haven't AMD and Intel created CPUs that are merely adequate for those users, much as Intel treats its IGPs as adequate?
     
  2. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,137
    Likes Received:
    2,939
    Location:
    Well within 3d
    If you go by the idea that a GPU is a programmable graphics processor, then most office workers could go without a GPU entirely without noticing.

    I think replacing the CPU with nothing would be a lot more noticeable.

    In other markets, there can be systems that have thousands of processors, but few if any GPUs. If there are a lot of GPUs, there are still a good number of CPUs to give them data to crunch on.

    High-end GPUs need high-end processors to do their job, no matter what. High-end processors can be used without the GPUs.

    If push came to shove, the GPU camp isn't going to trump the CPU group.

    If they share silicon, things might get worse; we'll find out how much worse when they try to port ATI's circuit designs onto AMD's process.
    It'll probably be worse than the trouble AMD had going from bulk Si to SOI.
    If that's the case, a lot of effort will be used up just getting the GPUs to work at all, never mind the high end.
     
  3. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    If you ignore Vista for a moment, then yeah, what is required to draw your typical 2D GUI would be pretty insignificant, but at the same time how much of a CPU would really be required to take user input and apply it to the word processor's/email client's/web browser's memory?

    In which case neither is replaceable with nothing. But my point still stands: if this is all that's required, Intel and AMD should be building and selling far more integrated, much simpler, and much cheaper products instead of the clock-speed and IPC behemoths that we have today.

    Which makes me think that the only reason we have the CPUs we do today is that high-end products create interest, and interest sells. What is unfortunate is that they don't seem to have realized that the same can be made true for GPUs, and this is what has ultimately been holding back IGPs and will probably cause the death of high-end GPUs should Nvidia go out of business or get bought.

    That would be something interesting to look at: how many GPUs are sold to gamers versus how many CPUs are sold into the Top500 supercomputers, and then break the GPUs down into single-chip and multi-chip/board configurations.

    I honestly haven't the foggiest idea how that comparison would truly look, but I think it would be safe to assume that GPUs are at least as profitable as those supercomputers are.

    You may notice that I'm intentionally ignoring web servers; that's because they are primarily I/O-bound and much like my hypothetical office machine above, except that now you might want to scale to more cores, à la Niagara.

    Are the Xbox 360 and PS3 examples of the future? I certainly wouldn't consider either of their CPUs particularly high-end compared to a C2D or X2 on current programs.

    What's to say that a GPU won't become programmable enough, and with virtualized memory be capable of performing low-end CPU tasks, just as high-end CPUs today are capable of doing all the rendering a GPU can?

    But going into the future, what kind of workloads should we expect? Will they be TLP-heavy and parallel, or do we still need further increases in clock speed and IPC? Personally I say we need both, but by all appearances we will be getting processors (CPU and GPU) that fall firmly into the first camp, and the only reason the CPU will swallow the GPU is that the CPU companies are bigger.
     
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    The cool thing about D3D10 is that it makes the GPU much less reliant upon the CPU. Trouble is, it'll be years before games get to take full advantage of that.

    Meanwhile, Aero Glass will carry the flag: lots of swanky visuals with practically zero CPU load.

    Jawed
     
  5. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,436
    Likes Received:
    443
    Location:
    New York
    You mean the $10 billion shareholders already have? I don't think you gain anything if somebody gives you $10B in cash in exchange for $10B in stock :razz:
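    (Back-of-the-envelope with deliberately round, made-up numbers: change in shareholder value = cash received - market value of the stock given up = $10B - $10B = $0, before taxes and fees. The only way the shareholders actually come out ahead is if the bid carries a premium over the market price.)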
     
  6. Pete

    Pete Moderate Nuisance
    Moderator Veteran

    Joined:
    Feb 7, 2002
    Messages:
    4,969
    Likes Received:
    375
    I was under the impression that the GF6 IGP has only two pipes, that the GF7 IGP would stay with two pipes but add the extra MADD per pipe and drop everything to 90nm, and that RS690's X700-based (read: SM2b) IGP would have four pipes (not the X700's full eight :roll:). As it is, the GF7-based MCP61 appears to stay in the same class as the similarly two-pipe 6100/6150 and RS485.

    As for the rumor, though I don't think it's likely (given Intel's upcoming 80-ALU CPU, NV's market position and value, egos, etc.), it's got to be worrying someone at ATI-AMD. I'm also not sure how a $10M investment in ImgTec in any way precludes a $10B "investment" in Nvidia.

    Edit: NP, Razor. BTW, confirmation that RS690 will have a quad-pipe IGP.
     
    #66 Pete, Oct 5, 2006
    Last edited by a moderator: Oct 6, 2006
  7. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    :oops: hmm, odd, I thought they had 4, sorry!

    Ah, the GF 6100 IGP update had double MADD execution like the GF7's; I thought it was twice the pipelines.
     
  8. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,137
    Likes Received:
    2,939
    Location:
    Well within 3d
    The key is that a CPU can be made to run a basic software renderer.
    A GPU can't be made to take user input and drive the system.

    The lower the need for a GPU, the more acceptable it is to have the CPU or some basic video hardware do the rendering.

    There is no corresponding increase in the utility of a GPU if the CPU can do less and less.

    The cost of low-end CPUs is such that there is little difference in price to most buyers whether it's a Celeron or a Centaur, besides the fact that a bunch of IT departments only buy Intel due to the branding.

    We buy newer cores just because Intel's not going to make Pentium Pros on its latest and greatest fabs. It's not worth maintaining that platform when the costs of the units are so low anyway.

    Not exactly, the CPU can struggle along pathetically without a GPU.
    The GPU cannot do the same thing.

    The high end needs more performance, and the high end brings margins. It is cheaper to amortize the development costs of the high end by using it again for the mid and low end, as well as suppressing any upstarts that might try to weasel in from below.
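    To put rough, made-up numbers on the amortization point: R&D per unit = fixed design cost / units shipped. If a high-end design costs $400M and ships only 1M flagship parts, that's $400 of R&D per chip; reuse the same design for mid and low-end parts and ship 20M units total, and the burden drops to $20 per chip. The figures are purely illustrative, but that is why one flagship design tends to get reused across the whole stack.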

    We haven't run out of a need for performance growth, though Intel and AMD would slow core introductions if they could be confident the other would do the same.
    The high-cost variants exist more as marketing strategies than as income sources. It's the less-than-insane versions that carry the volume.

    GPUs already use high-end SKUs to generate interest and create price segregation. If Nvidia and ATI and everyone else magically disappeared, then AMD (let's say the merger didn't happen) or Intel would step in, because the primary obstacle to their entry is the great lead in expertise that Nvidia has on its own turf.

    The Top500 aren't the only consumers of large numbers of processors. The entire low to mid-end server market is a heavy user of CPUs.

    The top supercomputers often get discounted rates; direct profits aren't the only concern in that market.

    Not every server only has lightweight threads. Niagara's niche is a bit broader than some thought it would be, but not that broad.

    I don't think you can put CELL in the low-end category, and Xenon isn't entirely that bad.

    There's no reason why not, but is it really a GPU or just a CPU that's good at graphics? If it becomes the central processing unit of the system, the GPU moniker would be even more meaningless than it is now.

    Neither type is going to stagnate entirely with IPC and clock speed. There will always be a need for single-threaded performance, if only to fully utilize the parallel units in less than ideal conditions.
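    To put illustrative numbers on that single-threaded point, using the standard Amdahl's law arithmetic (round figures, nothing measured): speedup on N parallel units = 1 / ((1 - p) + p/N), where p is the parallelizable fraction of the work. With p = 0.9 and N = 16 that gives roughly 6.4x, and even with N going to infinity it caps at 10x, so the serial slice, and hence single-threaded speed, never stops mattering.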

    If CPU companies swallow up the GPU companies, it will be in no small part because the market for CPUs is just bigger, and because GPUs need CPUs far more than the other way around.
     
  9. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    In reality you are absolutely correct.

    The general point I was trying to make is that your average user uses so little processing power that the CPUs we have today make little sense. This is also the reasoning that the CPU manufacturers would likely use to axe high-end GPU development. And in both of those cases it would make very little sense to put many resources into either.

    But in reality Intel and AMD are building very nice CPUs because some people need this performance and are willing to pay for it. The same is true for GPUs, yet whoever is in charge of these kinds of decisions at Intel and AMD doesn't seem to realize this, and they try to keep the GPU as small and low-end as possible.

    Where I was trying to lead that thought was that Intel keeps its IGP small, since no one 'needs' that performance, to minimise cost and maximise profit. The same could have been done for CPUs: they could have stayed with the Pentium Classic, kept shrinking the die, ramping the clock speed, and adding cache and features until they had an SoC. After all, no one 'needs' that performance in the very same sense that no one 'needs' a fast IGP.


    Oh, and I do realize that there is a lower bound on how little silicon one would want to use, and that going with a single high-end design can help cut costs. I just think that the way Intel has treated the GPU is kind of silly when you compare their apparent reasoning for it with the world in which most of their CPUs live.


    I'll admit I don't know a lot about web serving, but what tasks would a web server be doing that aren't very latency-tolerant and don't involve a lot of concurrent work? If you have to send data over the internet, isn't that latency going to let you mask the delay of any heavyweight threads run on a CPU like Niagara? I honestly don't know, but it seems like it should.
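    Rough queueing arithmetic, with made-up numbers, just to frame it: by Little's law, requests in flight = throughput × latency, so a server handling 1,000 requests/s at 100 ms average latency has about 100 requests outstanding at any instant. A chip with lots of slow hardware threads can cover that concurrency; what it can't hide is a single request whose own compute time, rather than its I/O wait, dominates.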

    I'm not trying to say that they are necessarily low-end or bad, but they made trade-offs that hurt serial general-purpose computing very badly. And that is why I tried to qualify my comment with 'current programs'. If the future goes one way, CELL will go down fondly in the history books as revolutionary; if the future goes the other way, it will go down in the history books right next to Alpha, Itanium, and many others.

    They will probably become one and the same, but I imagine the name will still indicate what something is going to be good at. What would still differentiate a chip called a GPU from one called a CPU is how the internal data paths are configured, which functional units are emphasised, batch sizes, etc...

    After all, what's the difference between Conroe and CELL? Would you use them for the same kinds of tasks? Of course my example here is flawed since both are considered CPUs, but in the future, who knows!
     
  10. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
  11. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Let the speculation begin on who the "other players" are. :lol:
     
  12. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    IBM or TSMC. :D
     
  13. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    12,885
    Likes Received:
    9,287
    Location:
    Cleveland
    AMD
     
  14. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Now that they are in debt to the bank due to ATI, it would be more likely the other way around (NV taking over AMD). :D

    I still believe IBM would be a perfect candidate.
    They previously collaborated on producing several GPUs (FX 5700 Ultra, etc.).
    Both have ties to AMD, and both are in the HyperTransport Consortium.
    Both collaborated on the PS3.
    Each of them has an extensive tech/IP portfolio.
    Intel doesn't trust either one of them, apparently.
    IBM certainly has the big bucks to do this.


    TSMC would be a distant second.
    They already have the fabs, so why not purchase one of their best clients and get a bigger piece of the profits (while cutting one of the middlemen out of the chain)?
    Most likely Philips (one of TSMC's main shareholders) would be interested too, due to NV's multimedia and mobile/phone graphics chips.
     
    #74 INKster, Oct 7, 2006
    Last edited by a moderator: Oct 7, 2006
  15. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Booooring!

    Now Sony might intrigue Jen-Hsun. . .
     
  16. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Why? Same arrogance level :lol:

    I was thinking the same thing actually.
     
  17. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    I think you meant to spell that "charismatic, confident vision of the future". :razz:
     
  18. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    They got paid by Sony under contract.
    That doesn't mean that they don't know it would be a bad deal (and bad PR, NV is good at it :D) to allow themselves to be bought by a company on a downward spiral.

    Jen-Hsun is no fool...:wink:
     
  19. LeStoffer

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,253
    Likes Received:
    13
    Location:
    Land of the 25% VAT
    Spot on, DemoCoder!

    I can understand why AMD wanted ATI, but the other way around? I doubt it with regard to the high end ATI R&D for the next generation. :sad:
     
  20. Farid

    Farid Artist formely known as Vysez
    Veteran Subscriber

    Joined:
    Mar 22, 2004
    Messages:
    3,844
    Likes Received:
    108
    Location:
    Paris, France
    Wait, no one?

    Man, you were the one to doubt the power of Chinese written internet entries, not me!
     