It's not obvious to me that it even makes sense. x86 cores have to support all sorts of legacy cruft and little niceties if you want half-decent single-threaded performance. Grafting all of this into a GPU-like core purely for the sake of being able to intermix the instruction streams doesn't seem to me like such a hot idea (more like con-Fusion!). Why is it necessary? What's wrong with heterogeneous cores?
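For what it's worth, here's a minimal sketch of the heterogeneous model I mean, assuming a garden-variety CUDA setup (the kernel and buffer names are just illustrative): the latency-oriented x86 core runs the host code, the throughput-oriented GPU cores run the kernel, and the two instruction streams never have to share a pipeline; they just share data across an explicit launch boundary.

```cuda
// Sketch of heterogeneous cores cooperating without a fused ISA:
// the host (x86) does setup and control flow, the device (GPU) does the
// data-parallel work, and unified memory handles the data hand-off.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // throughput work stays on the GPU cores
}

int main() {
    const int n = 1 << 20;
    float *buf;
    cudaMallocManaged(&buf, n * sizeof(float));  // shared address space, separate cores
    for (int i = 0; i < n; ++i) buf[i] = 1.0f;   // control/setup stays on the x86 core

    scale<<<(n + 255) / 256, 256>>>(buf, 2.0f, n);
    cudaDeviceSynchronize();

    printf("buf[0] = %f\n", buf[0]);             // expect 2.0
    cudaFree(buf);
    return 0;
}
```

Nothing in that picture requires the GPU core to decode x86 or the x86 core to grow SIMT hardware, which is exactly why the "fuse them into one core" pitch puzzles me.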
Something doesn't add up.