Anandtech: AMD-ATI Merger in the Works?

Discussion in 'Graphics and Semiconductor Industry' started by Farid, May 31, 2006.

  1. chavvdarrr

    Veteran

    Joined:
    Feb 25, 2003
    Messages:
    1,165
    Likes Received:
    34
    Location:
    Sofia, BG
Perhaps AMD wants to buy ATI, then shut down production of chipsets for Intel CPUs?

    OMG, that will be soooo mean :D

Don't forget that AMD has a contract with Chartered Semi to produce CPUs, and the first batches of these should be out this month, AFAIK.

    But buying ATi is just a rumour.
     
    #41 chavvdarrr, Jun 1, 2006
    Last edited by a moderator: Jun 1, 2006
  2. IgnorancePersonified

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    778
    Likes Received:
    18
    Location:
    Sunny Canberra
AMD and SiS make more sense. Going off the Vista argument above - reasonable IG, solid chipsets/peripherals, and a much lower $ figure... but then again... I really doubt AMD is going to upset the healthy platform support it has generated with 'partners' for its CPUs.
     
  3. Hellbinder

    Banned

    Joined:
    Feb 8, 2002
    Messages:
    1,444
    Likes Received:
    12
  4. MulciberXP

    Regular

    Joined:
    Oct 7, 2005
    Messages:
    331
    Likes Received:
    7
I know what it is. Remember all the talk from Jen-Hsun about nVidia and GPUs eventually negating the need for CPUs? Well, AMD sees the writing on the wall. They can't afford nVidia, which is on very solid financial ground, but maybe they can afford ATI. This way, in 2010 or later they'll already have a working GPU that can run Windows! :lol:
     
  5. _xxx_

    Banned

    Joined:
    Aug 3, 2004
    Messages:
    5,008
    Likes Received:
    86
    Location:
    Stuttgart, Germany
    Maybe nV could buy ATI? :razz:

    And AMD will go for ImgTech to undercut Intel on the IGP side. Simon, got any calls from AMD lately? :twisted:
     
  6. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    Oh..that's so reverendish!
     
    Tim Murray, MuFu, Geo and 1 other person like this.
  7. karlotta

    karlotta pifft
    Veteran

    Joined:
    Jun 7, 2003
    Messages:
    1,292
    Likes Received:
    10
    Location:
    oregon
    so AMD is gaining TSMC share.
    AMD has always been reluctent to make SB , or any chipsets . Then pulling NVDA+Ali type move on Intel thus costing Intel a good fall back partner in ATI, just as chipzilla is about to throw out 16,000.... due to this merger? I dont think D.O would go for a hostile play.With r600 products already contracted to TSMC, and a fair bit at UMC. So what, AMD still licences out for Nforce, N5 is already set for AM2... This whole relationship seems Googlesq.
     
  8. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
    I hardly think adding an integrated CPU/GPU to the product mix will screw up the supply chain. If it doesn't happen it will be for technical and market reasons. You might remember that a few years ago Intel thought this integration was a good idea and they designed a chip. Not sure of the name (Timna?). At some point the timing might be right for certain markets so I'd say it is worth considering periodically.
     
  9. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    Aaah, no!

    This would be.


    :wink:

    BTW I like the new word, somebody get to the wiki.
     
    digitalwanderer likes this.
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Amdahl's law is based upon an entirely wrong assumption. There is no upper limit on parallelism in software because when you are writing software for hardware that allows for more parallelism, you allow it to process more, instead of just attempting to execute the same code as you would on a single-threaded system.

    And furthermore, the idea that there is some part of the program that needs to be run in serial only applies when starting and ending a program. Within a game, for example, you can make use of pipelining so that there is never any part of the game code which, during play, must be run in serial with everything else.
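(Editor's note: the objection above is essentially Gustafson's law versus Amdahl's law. A toy calculation, with a hypothetical 5% serial fraction, shows the difference between assuming a fixed workload and assuming the workload scales with the hardware:)

```python
# Amdahl's law assumes a fixed workload; Gustafson's law (the argument made
# above) assumes the workload scales with the available parallelism.

def amdahl_speedup(serial_fraction, n_processors):
    # Fixed problem size: speedup is capped at 1 / serial_fraction
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

def gustafson_speedup(serial_fraction, n_processors):
    # Scaled problem size: the parallel part grows with processor count
    return serial_fraction + (1.0 - serial_fraction) * n_processors

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 2), round(gustafson_speedup(0.05, n), 2))
```

With a 5% serial fraction, Amdahl caps speedup at 20x no matter how many processors you add, while the scaled-workload view keeps growing roughly linearly.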

Not necessarily. This could be a good solution for low-end graphics applications (the available memory bandwidth wouldn't allow a high-end solution). In other words, such a setup could be a good product for the market currently occupied by the Celeron and the Sempron. It would be a bit more expensive than these products, clearly, but may make sense if it provided a lower total system cost.

    Note that a reasonable amount of 3D acceleration will soon be necessary to run Windows Vista.
     
  11. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
On topic, though, I'd like to say that I don't think a merger between ATI and AMD would be good for ATI's GPUs. Along with the merger would come increased management overhead, and where ATI is now a nimble company that can make its own decisions, its product moves would have to be filtered through AMD management, if only to obtain fab access. I find it rather likely that ATI would get the short end of the stick when it came to fab resources in the event of a merger.
     
  12. Kombatant

    Regular

    Joined:
    May 29, 2003
    Messages:
    639
    Likes Received:
    19
    Location:
    Milton Keynes, UK
    AMD buying an Intel-fanboi company... now that would be something... :lol:
     
  13. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    I think this is a key observation.

Does anyone honestly think AMD won't be screwed in the coming decade? It took over 5 years of performance and value leadership (aside from a little slip just before the A64 was released) to get the market to even notice AMD. The P4 was hot, expensive, and underperforming.

    Now that Intel has Conroe coming out with likely superior performance to AMD, what are their chances? If Intel can dominate AMD with a poorer product, what will they do with a superior one?

The only issue is that I think CPU margins are quite a bit better than GPU margins for the same size of chip. It would be interesting if AMD could leverage their high-clock-speed technology and know-how in the GPU market.
     
  14. rwolf

    rwolf Rock Star
    Regular

    Joined:
    Oct 25, 2002
    Messages:
    968
    Likes Received:
    54
    Location:
    Canada
Perhaps ATI and AMD are not merging, but are merely collaborating on a future product.
     
  15. IgnorancePersonified

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    778
    Likes Received:
    18
    Location:
    Sunny Canberra
Not everyone wants an Intel-only x86 market. Intel lost a lot of customer faith in those 5 years, and AMD is now seen as a serious provider of server equipment. Last year I couldn't order an AMD-based system via our internal purchasing channels. This year a roughly 1000-unit server refresh looks to be predominantly using AMD processors. Quite a difference.
     
  16. asicnewbie

    Newcomer

    Joined:
    Jun 29, 2002
    Messages:
    116
    Likes Received:
    3
I personally don't see this happening for two key reasons. First and foremost, CPUs and GPUs follow radically different physical-design strategies. CPUs rely heavily on exotic transistor topologies and circuit-design strategies to achieve their design goals (whether it's power consumption, area, or clock frequency.) GPUs are much closer to traditional cell-based design (i.e. standard-cell.) Despite advances in EDA tools, the CPU design cycle is constrained by labor-intensive manual layouts. Furthermore, CPU manufacturers have direct control and visibility into their own fab lines, giving them an 'edge' in terms of attacking bleeding-edge manufacturing and semiconductor-design issues. For the CPU vendor, direct process control mitigates some of the risk inherent to exotic ("L33T" :)) circuit design.

The second obstacle is product scheduling. GPUs don't have the same product lifespan as CPU lines. NVidia and ATI also maintain hectic release schedules (with several variations on a core GPU architecture.) While they share high-level architectural heritage, from a physical/layout perspective I'd guess they're essentially all-new layouts (i.e., minimal re-use between NV40/NV43/NV44, etc.) Compare this with the CPU world, where there's an alarming habit of doing an all-layer (mask set) change, with few or NO functional changes, simply to improve manufacturability.

Having said that, modern GPUs and modern CPUs are probably moving 'closer together' (from a physical-design perspective.) Speed-critical portions of a GPU do receive manual (hand-layout) optimization. Non-speed-critical portions of a CPU are doable with a standard-cell flow. One of Intel's past papers stated that the Pentium 4 was the first Intel CPU where CBA (cell-based automation) tools generated >50% of the CPU's logic die-area. So who knows what the future will bring.

    Intel's GMA (integrated graphics core) has evolved at a comparatively slow (i.e. glacial!) pace. If it weren't for the upcoming GMA965 (with its radically revamped 3D-pipeline), the GMA9xx would have been the most-likely candidate for a hand-layout optimization.
     
    Simon F likes this.
  17. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
Yeah, you're right asicnewbie, I just thought that maybe something could come out of it. Another big reason it probably won't happen is that if you triple the clock speed, you need three times the register space to absorb the latency from texture accesses. That's a big chunk of silicon right there. Your math logic also occupies more space in order to run that fast (probably more pipeline stages, and maybe more room to avoid interference, etc.). In the end you'll probably have to reduce the pipeline count to increase the clock speed, thus partially negating the advantage of such a venture.

    There are a few reasons I think it may be possible though:
    • With a merger, ATI would have direct control and visibility into their own fab-lines
    • GPU features will probably reach a relative standstill soon. DX10 seems to be forward looking enough that software will need years to catch up (it's already way behind). This makes a 3-year long-term project feasible.
    • GPUs are insanely parallel, so you probably only have to hand-optimize a small portion. You could keep everything like the scheduler, triangle setup, rasterizer, etc the same. By just optimizing the shader unit, texture units, and blending units to run at 4x the normal speed, you could probably use one fourth the units.
    • With the die space you could save, why not? Money makes everything happen...
    • I think AMD and Intel have become very good at making fast, compact cache structures, and they're probably a lot better than what we see in GPUs. This could help GPU design a lot.

    I still think this rumour is false, but I'm just saying it would be rather interesting to see what they come up with.

    Another big factor is ATI's success in consumer devices. This could be the diversity that AMD is looking for.
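(Editor's note: the latency-hiding point at the top of this post can be sketched with back-of-the-envelope arithmetic. All numbers below are hypothetical illustrations: if texture latency stays fixed in wall-clock terms, tripling the clock triples the latency in cycles, and the register file needed to keep threads in flight scales with it.)

```python
def threads_in_flight(latency_cycles, issue_per_cycle=1):
    # Roughly one thread issued per cycle of latency to keep the ALUs busy
    return latency_cycles * issue_per_cycle

def register_file_bytes(latency_cycles, regs_per_thread=8, bytes_per_reg=16):
    # Each in-flight thread needs its own set of live registers
    return threads_in_flight(latency_cycles) * regs_per_thread * bytes_per_reg

base_clock = register_file_bytes(200)  # ~200-cycle texture latency at 1x clock
tripled = register_file_bytes(600)     # same wall-clock latency = 3x the cycles
assert tripled == 3 * base_clock
```

Under these assumptions the register file grows linearly with clock speed, which is exactly the "big chunk of silicon" being traded against pipeline count.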
     
  18. tmp

    tmp
    Newcomer

    Joined:
    Jul 3, 2004
    Messages:
    10
    Likes Received:
    1
    A few cooperation points.

AMD is pushing for x86 everywhere in the embedded space with its Geode series. They are going to need some kind of 3D accelerator soonish. Preferably one with good driver compatibility with PC accelerators.

AMD CPUs have the memory controller on die. An IGP likes to be near the memory controller. A Sempron with on-die graphics doesn't sound completely insane.

AMD is working with Chartered on 65nm SOI CPU manufacturing. AMD is also working with ISI to validate the Z-RAM technology. If Z-RAM works (even nearly) as advertised, a 65nm SOI Xenos with Z-RAM would probably be cheaper and have much lower power consumption than the current model. Actually, if Z-RAM works as advertised, any new high-end GPU design utilizing it would likely A) be very fast and B) have very low power consumption.
     
    Geo likes this.
  19. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Hrrm. I wonder how that license is written. I don't think I've heard anyone else address whether that tech is suitable for gpus.
     
  20. asicnewbie

    Newcomer

    Joined:
    Jun 29, 2002
    Messages:
    116
    Likes Received:
    3
    AMD could always license an Imgtec SGX core, just as Intel has already done. The licensing route would represent a much lower initial outlay of cash (versus acquisition of ATI.) AMD would still need time to develop the licensed SGX-core into a full-blown PC-style (VGA-compatible) graphics unit.

    Curious, how would this directly help? Intel and AMD x86 CPUs divert a huge (~50% or more) die-area to cache alone. I thought GPUs devote less die-area to cache (<30%.) The embedded RAM (on-die) framebuffer doesn't fit as well in a Windows/PC-environment, because the resolution-target isn't fixed.
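(Editor's note: the framebuffer point above can be made concrete with a quick size estimate. The numbers are hypothetical: a console part like Xenos can budget on-die RAM for one known resolution, while a PC part must cover whatever the desktop happens to be set to.)

```python
def framebuffer_bytes(width, height, bytes_per_pixel=4, buffers=2):
    # Double-buffered 32-bit color only; Z/stencil would add more on top
    return width * height * bytes_per_pixel * buffers

console = framebuffer_bytes(640, 480)    # a fixed TV-resolution target
desktop = framebuffer_bytes(1600, 1200)  # just one of many possible PC targets
print(console, desktop)
```

A fixed 640x480 target fits in a few megabytes of on-die RAM, but a 1600x1200 desktop needs roughly six times as much, so an embedded framebuffer sized for one resolution can't serve the general PC case.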
     