Anandtech: AMD-ATI Merger in the Works?

Perhaps AMD wants to buy ATi and then shut down production of chipsets for Intel CPUs?

OMG, that will be soooo mean :D

Don't forget that AMD has a contract with Chartered Semi for producing CPUs, and the first batches of these should be out this month, AFAIK.

But buying ATi is just a rumour.
 
AMD and SiS make more sense. Going off the Vista argument above - reasonable IG and solid chipsets/peripherals at a much lower $ figure... but then again... I really doubt AMD is going to upset the current healthy platform support it has generated with 'partners' for its CPUs.
 
I know what it is. Remember all the talk from Jen-Hsun about nVidia and GPUs eventually negating the need for CPUs? Well, AMD sees the writing on the wall. They can't afford nVidia, which is on very solid financial ground, but maybe they can afford ATI. This way, in 2010 or later they'll already have a working GPU that can run Windows! :LOL:
 
Maybe nV could buy ATI? :p

And AMD will go for ImgTech to undercut Intel on the IGP side. Simon, got any calls from AMD lately? :devilish:
 
so AMD is gaining TSMC share.
AMD has always been reluctant to make SBs, or any chipsets. Then pulling an NVDA+ALi type move on Intel, costing Intel a good fall-back partner in ATI just as chipzilla is about to throw out 16,000... due to this merger? I don't think D.O. would go for a hostile play, with R600 products already contracted to TSMC and a fair bit at UMC. So what, AMD still licences out for nForce, and N5 is already set for AM2... This whole relationship seems Google-esque.
 
The Baron said:
Consolidation also screws up the supply chain. You add a premium to the most fundamental component for something that might not necessarily be used, and if you have chips with an integrated GPU and chips without, you also drastically confuse your supply chain.

It's not better, it's not even worth considering. It's just silly.
I hardly think adding an integrated CPU/GPU to the product mix will screw up the supply chain. If it doesn't happen, it will be for technical and market reasons. You might remember that a few years ago Intel thought this integration was a good idea and designed such a chip; not sure of the name (Timna?). At some point the timing might be right for certain markets, so I'd say it is worth considering periodically.
 
nAo said:
Oh..that's so reverendish!
Aaah, no!

This would be.

Reverend (not really) said:
In a personal email from John Carmack, in which he shared with me the following confidential information (which I will now proceed to share with the world for no other reason than re-re-re-reinforcing the fact that he personally emails me); the rumor is false.


;)

BTW I like the new word, somebody get to the wiki.
 
The Baron said:
General-purpose CPUs are not inherently parallel, but software can be. Look up Amdahl's Law. There is an upper-limit on parallelism in software, although you'll get plenty more benefit from multitasking and the like.
Amdahl's law is based upon an entirely wrong assumption. There is no upper limit on parallelism in software, because when you write software for hardware that allows for more parallelism, you have it process more, instead of just attempting to execute the same code as you would on a single-threaded system.

And furthermore, the idea that there is some part of the program that needs to be run in serial only applies when starting and ending a program. Within a game, for example, you can make use of pipelining so that there is never any part of the game code which, during play, must be run in serial with everything else.
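Side note for anyone who wants the actual formulas being argued about: below is a minimal Python sketch of Amdahl's law alongside the scaled-workload view this post is describing (usually credited to Gustafson). The serial fraction used is an illustrative guess, not a measurement.

```python
# A minimal sketch of the two speedup models being argued about here.
# The serial fraction 's' is an illustrative number, not a measurement.

def amdahl_speedup(s: float, n: int) -> float:
    """Amdahl's law: fixed workload, serial fraction s, n processors."""
    return 1.0 / (s + (1.0 - s) / n)

def gustafson_speedup(s: float, n: int) -> float:
    """Gustafson's scaled speedup: the parallel part of the job grows with n."""
    return s + (1.0 - s) * n

if __name__ == "__main__":
    s = 0.05  # assume 5% of the work is inherently serial
    for n in (2, 4, 16, 64, 256):
        print(f"n={n:4d}  Amdahl={amdahl_speedup(s, n):6.2f}  "
              f"Gustafson={gustafson_speedup(s, n):7.2f}")
    # Amdahl's speedup can never exceed 1/s (20x here) no matter how many
    # processors you add; the scaled view keeps growing because the problem
    # size is allowed to grow with the machine, which is the point above.
```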

Consolidation also screws up the supply chain. You add a premium to the most fundamental component for something that might not necessarily be used, and if you have chips with an integrated GPU and chips without, you also drastically confuse your supply chain.
Not necessarily. This could be a good solution for low-end graphics applications (the memory bandwidth available wouldn't allow a high-end solution). In other words, such a setup could be a good product for the market currently occupied by the Celeron and the Sempron. It would be a bit more expensive than those products, clearly, but it might make sense if it provided a lower total system cost.

Note that a reasonable amount of 3D acceleration will soon be necessary to run Windows Vista.
 
On topic, though, I'd like to say that I don't think a merger between ATI and AMD would be good for ATI's GPUs. Along with the merger would come increased management overhead, and where ATI is now a nimble company that can make its own decisions, its product moves would have to be filtered through AMD management, if only to obtain fab access. I find it rather likely that ATI would get the short end of the stick when it came to fab resources in the event of a merger.
 
Brimstone said:
That's the short term outlook.

ATI would help ensure volume levels (i.e. lowering financial risk) as AMD invests money to increase production capacity (i.e. building new FABs).
I think this is a key observation.

Does anyone honestly think AMD won't be screwed in the coming decade? It took over 5 years of performance and value leadership (aside from a little slip just before the A64 was released) to get the market to even notice AMD. The P4 was hot, expensive, and underperforming.

Now that Intel has Conroe coming out with likely superior performance to AMD, what are their chances? If Intel can dominate AMD with a poorer product, what will they do with a superior one?

The only issue is I think CPU margins are quite superior to GPU margins for the same size chip. It would be interesting if AMD could leverage their high clockspeed technology and know-how into the GPU market.
 
Perhaps ATI and AMD are not merging, but are merely collaborating on a future product.
 
Not everyone wants an Intel-only x86 market. Intel lost a lot of faith in those 5 years, with AMD now being seen as a serious provider of server equipment. Last year I couldn't order an AMD-based system via our internal purchasing channels. This year a roughly 1000-unit server refresh looks to be predominantly using AMD processors. Quite a difference.
 
Mintmaster said:
I think this is a key observation.
The only issue is I think CPU margins are quite superior to GPU margins for the same size chip. It would be interesting if AMD could leverage their high clockspeed technology and know-how into the GPU market.

I personally don't see this happening for two key reasons. First and foremost, CPUs and GPUs follow radically different physical-design strategies. CPUs rely heavily on exotic transistor topologies and circuit-design strategies to achieve their design goals (whether it's power consumption, area, or clock frequency). GPUs are much closer to a traditional cell-based (i.e. standard-cell) flow. Despite advances in EDA tools, the CPU design cycle is constrained by labor-intensive manual layouts. Furthermore, CPU manufacturers have direct control of and visibility into their own fab lines, giving them an 'edge' in terms of attacking bleeding-edge manufacturing and semiconductor-design issues. For the CPU vendor, direct process control mitigates some of the risk inherent to exotic ("l33t" :)) circuit design.

The second obstacle is product scheduling. GPUs don't have the same product lifespan as CPU lines. NVidia and ATI also maintain hectic release schedules (with several variations on a core GPU architecture). While they share high-level architectural heritage, from a physical/layout perspective I'd guess they're essentially all-new layouts (i.e., minimal re-use between NV40/NV43/NV44, etc.). Compare this with the CPU world, where there's an alarming habit of doing an all-layer (mask set) change, with few or NO functional changes, simply to improve manufacturability.

Having said that, modern GPUs and modern CPUs are probably moving 'closer together' (from a physical-design perspective). Speed-critical portions of a GPU do receive manual (hand-layout) optimization. Non-speed-critical portions of a CPU are doable with a standard-cell flow. One of Intel's past papers stated that the Pentium 4 was the first Intel CPU where CBA (cell-based automation) tools generated >50% of the CPU's logic die area. So who knows what the future will bring.

Intel's GMA (integrated graphics core) has evolved at a comparatively slow (i.e. glacial!) pace. If it weren't for the upcoming GMA965 (with its radically revamped 3D-pipeline), the GMA9xx would have been the most-likely candidate for a hand-layout optimization.
 
Yeah, you're right, asicnewbie, I just thought that maybe something could come out of it. Another big reason it probably won't happen is that if you triple the clock speed, you need three times the register space to absorb latency from texture accesses. That's a big chunk of silicon right there. Your math logic also occupies more space in order to run that fast (probably more stages, and it may need more room to avoid interference, etc.). In the end you'll probably have to reduce the pipeline count to increase the clock speed, thus partially negating the advantage of such a venture.
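To put rough numbers on that register-space point (all of them made up, purely for illustration): the registers needed to hide a fixed memory latency scale with the latency measured in cycles, and the cycle count scales with the clock. A minimal sketch:

```python
# Rough latency-hiding arithmetic; every constant here is hypothetical.
MEM_LATENCY_NS = 300.0   # assumed texture/memory latency in nanoseconds
REGS_PER_THREAD = 32     # assumed register budget per in-flight thread

def regs_needed(clock_ghz: float, issue_per_clock: int = 1) -> int:
    """Registers needed per pipeline to keep it busy across the latency."""
    latency_cycles = MEM_LATENCY_NS * clock_ghz        # ns * cycles-per-ns
    threads_in_flight = latency_cycles * issue_per_clock
    return int(threads_in_flight * REGS_PER_THREAD)

for ghz in (0.5, 1.5):   # e.g. a ~500 MHz GPU vs a 3x-clocked design
    print(f"{ghz} GHz -> ~{regs_needed(ghz)} registers per pipeline")
# Tripling the clock triples the latency measured in cycles, so the register
# file needed to hide it (per pipeline) roughly triples as well.
```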

There are a few reasons I think it may be possible though:
  • With a merger, ATI would have direct control and visibility into their own fab-lines
  • GPU features will probably reach a relative standstill soon. DX10 seems to be forward looking enough that software will need years to catch up (it's already way behind). This makes a 3-year long-term project feasible.
  • GPUs are insanely parallel, so you probably only have to hand-optimize a small portion. You could keep everything like the scheduler, triangle setup, rasterizer, etc. the same. By just optimizing the shader units, texture units, and blending units to run at 4x the normal speed, you could probably use one fourth the units (rough numbers in the sketch after this list).
  • With the die space you could save, why not? Money makes everything happen...
  • I think AMD and Intel have become very good at making fast, compact cache structures, and they're probably a lot better than what we see in GPUs. This could help GPU design a lot.
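
The "4x clock, one fourth the units" bullet is just peak-throughput arithmetic; here is a trivial sanity check with hypothetical unit counts and clocks, ignoring the latency-hiding and circuit issues above:

```python
# Peak-throughput equivalence check; unit counts and clocks are hypothetical.
def peak_ops_per_ns(units: int, clock_ghz: float) -> float:
    """One op per unit per clock -> operations per nanosecond."""
    return units * clock_ghz

baseline    = peak_ops_per_ns(units=16, clock_ghz=0.6)  # e.g. 16 units @ 600 MHz
speed_demon = peak_ops_per_ns(units=4,  clock_ghz=2.4)  # 1/4 the units @ 4x clock
print(baseline, speed_demon)  # both 9.6 ops/ns: identical paper throughput
```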

I still think this rumour is false, but I'm just saying it would be rather interesting to see what they come up with.

Another big factor is ATI's success in consumer devices. This could be the diversity that AMD is looking for.
 
A few cooperation points.

AMD is pushing for x86 everywhere in embedded space with its Geode series. They are going to need some kind of 3D accelerator soonish, preferably one with good driver compatibility with PC accelerators.

AMD CPUs have the memory controller on die. An IGP likes to be near the memory controller. A Sempron with on-die graphics doesn't sound completely insane.

AMD is working with Chartered on 65nm SOI CPU manufacturing. AMD is also working with ISI to validate the Z-RAM technology. If Z-RAM works (even nearly) as advertised, a 65nm SOI Xenos with Z-RAM would probably be cheaper and have much lower power consumption than the current model. Actually, if Z-RAM works as advertised, any new high-end GPU design utilizing it would likely A) be very fast and B) have very low power consumption.
 
tmp said:
AMD is also working with ISI to validate the Z-RAM technology. If Z-RAM works (even nearly) as advertised, a 65nm SOI Xenos with Z-RAM would probably be cheaper and have much lower power consumption than the current model. Actually, if Z-RAM works as advertised, any new high-end GPU design utilizing it would likely A) be very fast and B) have very low power consumption.

Hrrm. I wonder how that license is written. I don't think I've heard anyone else address whether that tech is suitable for GPUs.
 
tmp said:
A few cooperation points.
AMD is pushing for x86 everywhere in embedded space with its Geode series. They are going to need some kind of 3D accelerator soonish, preferably one with good driver compatibility with PC accelerators.

AMD could always license an Imgtec SGX core, just as Intel has already done. The licensing route would represent a much lower initial outlay of cash (versus acquisition of ATI.) AMD would still need time to develop the licensed SGX-core into a full-blown PC-style (VGA-compatible) graphics unit.

AMD is working with Chartered on 65nm SOI CPU manufacturing. AMD is also working with ISI to validate the Z-RAM technology. If Z-RAM works (even nearly) as advertised, a 65nm SOI Xenos with Z-RAM would probably be cheaper and have much lower power consumption than the current model. Actually, if Z-RAM works as advertised, any new high-end GPU design utilizing it would likely A) be very fast and B) have very low power consumption.

Curious, how would this directly help? Intel and AMD x86 CPUs divert a huge fraction (~50% or more) of die area to cache alone. I thought GPUs devote much less die area to cache (<30%). An embedded-RAM (on-die) framebuffer doesn't fit as well in a Windows/PC environment, because the resolution target isn't fixed.
 