Mintmaster said:
I think this is a key observation.
The only issue is I think CPU margins are quite superior to GPU margins for the same size chip. It would be interesting if AMD could leverage their high clockspeed technology and know-how into the GPU market.
I personally don't see this happening, for two key reasons. First and foremost, CPUs and GPUs follow radically different physical-design strategies. CPUs rely heavily on exotic transistor topologies and circuit-design techniques to achieve their design goals (whether power consumption, area, or clock frequency). GPUs are much closer to a traditional cell-based (i.e. standard-cell) flow. Despite advances in EDA tools, the CPU design cycle is constrained by labor-intensive manual layout. Furthermore, CPU manufacturers have direct control of and visibility into their own fab lines, giving them an 'edge' in attacking bleeding-edge manufacturing and semiconductor-design issues. For the CPU vendor, direct process control mitigates some of the risk inherent in exotic ("L33T") circuit design.
The second obstacle is product scheduling. GPUs don't have the same product lifespan as CPU lines. NVidia and ATI also maintain hectic release schedules (with several variations on a core GPU architecture). While the variants share high-level architectural heritage, from a physical/layout perspective I'd guess they're essentially all-new layouts (i.e., minimal reuse between NV40/NV43/NV44, etc.). Compare this with the CPU world, where there's an alarming habit of doing an all-layer (full mask-set) change, with few or NO functional changes, simply to improve manufacturability.
Having said that, modern GPUs and modern CPUs are probably moving 'closer together' from a physical-design perspective. Speed-critical portions of a GPU do receive manual (hand-layout) optimization, and non-speed-critical portions of a CPU are doable with a standard-cell flow. One of Intel's past papers stated that the Pentium 4 was the first Intel CPU where CBA (cell-based automation) tools generated >50% of the CPU's logic die area. So who knows what the future will bring.
Intel's GMA (integrated graphics core) has evolved at a comparatively slow (i.e. glacial!) pace. If it weren't for the upcoming GMA965 (with its radically revamped 3D pipeline), the GMA9xx would have been the most likely candidate for hand-layout optimization.