What if AMD/ATI were to become a member of the CELL consortium and offer GPU patents?

Brimstone

IBM and AMD already have a close relationship with fab technology.

AMD/ATI already has an extensive Rambus license, so fabbing a CELL CPU wouldn't cause any royalty problems.

AMD/ATI wouldn't have to remove focus from the core x86 market. It would be more a matter of engineers combing through patents from AMD CPUs and ATI GPUs to see what they could contribute to improving and evolving the design. GPU patents are something IBM/Sony/Toshiba really lack.

Low risk and nothing really to lose by joining.

Intel is an enemy/rival for all of these companies.

Of course the obvious question is would IBM/Sony/Toshiba even want them on board the CELL platform?
 
Looks good :)
The question is: who will use it, and how?

A new AMD/ATI visualization powerhouse with Mac OS X ?
Or maybe a visualization-oriented, friendly Linux, free of charge :cool:
 
Personally, I would hate to see it come to pass.

Think about it:
If all these companies were to join up on such a large venture as you propose, then Intel would have no choice but to counterstrike by picking up a huge pile of 3D and visualisation technology (you see where this is going, don't you? ;)).

Intel + Nvidia = "monopoly of the CPU market" + "monopoly of the professional 3D market / half of the discrete desktop market".

Yes, no one in his/her rational and unbiased mind can deny that both continue to do a lot for the market and for technology breakthroughs, but it's never a good sign when an already tight market loses yet another competitor.
Less competition leads to higher prices and less motivation for innovative ideas to reach the real world.
 
What's so good about Cell that would make AMD want to be a partner?

I'd take an Athlon over a Cell if given the choice.
 
The current Cell design sucks for developing a wide range of applications. Plain and simple.
Which developer in his right mind will have the time and the resources to start writing assembler for 7 crippled fp32-compatible mini-CPUs? Writing a business app on such a platform? DB software?
Cell2 may be worth the effort.
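(For a concrete feel of what that workload looks like, here is a rough C sketch of an SPE-side program in the style of the Cell SDK's spu_mfcio.h / spu_intrinsics.h intrinsics. The chunk size, tag number and the trivial scale-and-add kernel are invented purely for illustration, and the intrinsic signatures are written from memory of the SDK, so treat them as approximate. Note this is the "easy" C path, before anyone drops to actual assembler.)

```c
/* Sketch of an SPE-side kernel (Cell SDK style, from memory -- treat as approximate).
 * Even without raw assembler, you manage the 256 KB local store and all data
 * movement by hand via DMA. */
#include <spu_intrinsics.h>
#include <spu_mfcio.h>

#define CHUNK 4096   /* bytes per DMA transfer (illustrative) */
#define TAG   1      /* DMA tag group (illustrative) */

static vec_float4 in_buf[CHUNK / sizeof(vec_float4)];
static vec_float4 out_buf[CHUNK / sizeof(vec_float4)];

/* process one chunk: out = in * scale + bias, on 4-wide fp32 vectors */
static void kernel(vec_float4 scale, vec_float4 bias, unsigned n_vec)
{
    for (unsigned i = 0; i < n_vec; i++)
        out_buf[i] = spu_madd(in_buf[i], scale, bias);
}

int main(unsigned long long spe_id, unsigned long long argp, unsigned long long envp)
{
    unsigned long long ea_in  = argp;   /* main-memory addresses passed in          */
    unsigned long long ea_out = envp;   /* (illustrative parameter-passing scheme)  */

    /* pull a chunk from main memory into the local store and wait for it */
    mfc_get(in_buf, ea_in, CHUNK, TAG, 0, 0);
    mfc_write_tag_mask(1 << TAG);
    mfc_read_tag_status_all();

    kernel(spu_splats(2.0f), spu_splats(1.0f), CHUNK / sizeof(vec_float4));

    /* push the result back out and wait for completion before exiting */
    mfc_put(out_buf, ea_out, CHUNK, TAG, 0, 0);
    mfc_write_tag_mask(1 << TAG);
    mfc_read_tag_status_all();
    return 0;
}
```

Every byte the SPE touches has to be staged in and out of its 256 KB local store like this, so even a trivial kernel drags a lot of bookkeeping along with it.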

IMHO, any sign that AMD is interested in CELL would be an indication that they are desperate.
 
A quad-core K8L ought to be enough for anybody.
I can imagine that CPU in the Xbox 720; it would be a big enough performance increase while remaining symmetric.
 
IMHO, any sign that AMD is interested in CELL would be an indication that they are desperate.

Desperate wouldn't even do it justice; more like a last resort, a hail mary :LOL:.

But I can see them possibly starting to introduce Cell-type architecture slowly into their CPUs. Then again, it would cause internal competition to some degree, so most likely not.
 
I think the Cell architecture is the only good way for the future. The nature of mainstream applications is becoming more and more computationally intensive, with real-time physics simulation, HD video decompression, etc. The data-parallel stream architecture of the Cell is the most appropriate for efficiently implementing that kind of computation, with very high arithmetic intensity. So we will have to change the way we program, even if programming such an architecture seems more difficult coming from traditional sequential programming or multi-threading.
So I won't be surprised if AMD/ATI's plan is to build a Cell-like CPU/GPU hybrid, made of an x86-compatible command core connected to a set of GPU-shader-like SIMD ALUs.
ATI is going that way, unifying their shader architecture and providing low-level access to their hardware, and AMD knows multi-core alone can't be the solution for the future, when we will need hundreds of parallel cores.
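(As an illustration of that "change the way we program" point, here is a minimal plain-C sketch of the stream-style formulation: a pure per-element kernel mapped over the data, with no cross-element dependencies, so it could be spread across any number of SIMD lanes or shader-like ALUs. The function names and the toy physics term are invented for the example.)

```c
#include <stdio.h>
#include <math.h>

/* Pure per-element kernel: plenty of arithmetic, no shared state, no side effects. */
static float advance(float pos, float vel, float dt)
{
    float drag = 0.47f * vel * fabsf(vel);   /* toy physics term for illustration */
    return pos + (vel - drag * dt) * dt;
}

/* "Map the kernel over the stream": the only loop the programmer writes.
 * Because every element is independent, a stream machine is free to split
 * this across however many cores or SIMD lanes it happens to have. */
static void integrate(float *pos, const float *vel, unsigned n, float dt)
{
    for (unsigned i = 0; i < n; i++)
        pos[i] = advance(pos[i], vel[i], dt);
}

int main(void)
{
    float pos[4] = {0.0f, 1.0f, 2.0f, 3.0f};
    float vel[4] = {1.0f, 1.5f, 2.0f, 2.5f};

    integrate(pos, vel, 4, 0.016f);
    for (unsigned i = 0; i < 4; i++)
        printf("%f\n", pos[i]);
    return 0;
}
```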
 
Well, there's more than one way to have an architecture with a high degree of parallelism (like you get with Cell's SPEs). The Cell architecture itself has some issues with making use of all of those SPEs.
 
I think the Cell architecture is the only good way for the future. The nature of mainstream applications is becoming more and more computationally intensive, with real-time physics simulation, HD video decompression, etc. The data-parallel stream architecture of the Cell is the most appropriate for efficiently implementing that kind of computation, with very high arithmetic intensity. So we will have to change the way we program, even if programming such an architecture seems more difficult coming from traditional sequential programming or multi-threading.
Unfortunately, the tasks you mention aren't the core activities of a CPU.

If you ask me, the future is multi-core CPUs, SMP, and graphics/physics/vector coprocessors.
 
Yes, the graphics/physics/vector coprocessor being the GPU. It's already there, it's a big array of vector units with large bandwidth, the DX10 generation will bring much-needed flexibility (memory management, multitasking, integers), and it's much more useful than, say, a PPU card.

Any consumer having to choose between a $300 GPU or a $100 GPU + $200 coprocessor will choose the former, especially as the GPU works in all 3D games while the coprocessor is useless without new software written for it.

Eventually PC CPUs will become Cell-like, when they evolve past quad core... but then I hope AMD and Intel will agree on a common instruction set for the "SPEs".
 
I hope this isn't too far off topic but I am surprised that there has been no mention within this thread of the possible synergies from Nvidia joining STI.

My thoughts on the topic.

Intel won't buy Nvidia as they have too little to gain.

IBM would never let AMD join STI.

IBM would probably like Cell to become the de facto 'home' processor for Linux against the Wintel world.

STI are lacking in graphics capability (Sorry Toshiba)

Nvidia have already built a GPU with a FlexIO interface.

Sony would like to ensure continuity for PS4, so keeping the same GPU supplier, but as part of a consortium that Sony has an interest in, would help keep Nvidia from fleecing Sony on licensing for the next-gen GPU.

Therefore it seems a natural fit to me: joining STI would not require Nvidia to relinquish their much-cherished independence, and success for Cell could leave them as the only GPU supplier able to sell into every one of the AMD, Intel and STI markets.
 
Eventually PC CPUs will become Cell-like, when they evolve past quad core... but then I hope AMD and Intel will agree on a common instruction set for the "SPEs".

Do you mean CELL-like as in a whole bunch of cores (>4), or CELL-like as in heterogeneous ISAs with a non-coherent memory model?

I don't think we will see the latter... Ever.

We may see application-specific accelerators for key markets (XML parsing, encryption/decryption engines etc. for servers), but it makes no sense to introduce a new ISA + programming model just to run a few specific problems a tad faster than a fully fledged core will.

Cheers
 
It's what STI did with Cell (though I think Cell is premature and most likely sucks for gaming).

What makes me think that is Intel's "vision for 2015" and AMD's recent diagram with "Torrenza on die" (though in that case it looks to be about the XML and server things).

"just to run a few specific problems a tad faster than a fully fledged core will."

To me, a Cell-like "general purpose coprocessor" doesn't seem to be about doing things faster than a normal processor (which is much more complex and efficient), but about adding a big number of cores.

Maybe almost only the embarrassingly parallel problems (graphics, video and others) will benefit from more than four general, classic cores. If you're doing embarrassingly parallel things, a shitload of small inefficient cores may well be faster than a handful of big efficient cores.

So why not have four big cores + 16 small ones rather than eight big cores?
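(A toy back-of-the-envelope version of that argument could look like the C snippet below; every number in it is invented purely for illustration.)

```c
/* Toy Amdahl-style comparison of "8 big cores" vs "4 big + 16 small".
 * All throughput and workload numbers are invented for illustration only. */
#include <stdio.h>

int main(void)
{
    const double big_perf   = 1.0;   /* assumed throughput of one big core          */
    const double small_perf = 0.3;   /* assumed throughput of one small core        */
    const double serial     = 0.2;   /* fraction of the workload that is sequential */
    const double parallel   = 1.0 - serial;

    /* serial part runs on one big core; parallel part uses everything available */
    double t_8big  = serial / big_perf + parallel / (8 * big_perf);
    double t_mixed = serial / big_perf + parallel / (4 * big_perf + 16 * small_perf);

    printf("8 big cores      : time = %.3f\n", t_8big);
    printf("4 big + 16 small : time = %.3f\n", t_mixed);
    /* With these made-up numbers the mixed design wins on the parallel part
     * (4*1.0 + 16*0.3 = 8.8 vs 8.0) while likely costing less die area. */
    return 0;
}
```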

That's how I can speculate, given how I understand things (I'm not a programmer of multi-threaded things :)).

Then again: the current Cell sucks. It's probably great for video encoding/decoding, but with its single, weak main core it's probably bad at a good number of tasks.
 
Do you mean CELL-like as in a whole bunch of cores (>4), or CELL-like as in heterogeneous ISAs with a non-coherent memory model?

I don't think we will see the latter... Ever.

Not even as in something like a framebuffer for a unified GPU? Or the local storages for each GPU pipeline / thread context?

;)
 
Then again: the current Cell sucks. It's probably great for video encoding/decoding, but with its single, weak main core it's probably bad at a good number of tasks.
You're thinking the wrong way around. There really aren't that many GP integer things that would benefit from many GP cores. And the SPEs are very GP as it is. They can do all but the low-level managing, which is only a tiny part of any task.

The only thing that is up for discussion is local storage with DMA versus unified storage with cache memory. And as soon as you move most of your bandwidth-hungry units onto the same die (CPU cores and (GP)GPU), it becomes a moot point.
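(To make that DMA-versus-cache distinction concrete, here is a hedged sketch of the same kind of loop under both models. The cached version is ordinary C; the local-store version has to double-buffer its transfers by hand to hide the latency a cache would hide for you. The mfc_* intrinsic names are from memory of the Cell SDK, and the chunk size and alignment are invented for the example.)

```c
#include <spu_intrinsics.h>
#include <spu_mfcio.h>

/* (a) Unified storage with a cache: the hardware moves the data behind your back. */
float sum_cached(const float *src, unsigned n)
{
    float total = 0.0f;
    for (unsigned i = 0; i < n; i++)
        total += src[i];                 /* misses and prefetches handled by the cache */
    return total;
}

/* (b) Local store with explicit DMA: the programmer moves the data,
 *     double-buffering transfers to overlap computation and fetch. */
#define CHUNK 4096                       /* bytes per DMA transfer (illustrative) */
static float buf[2][CHUNK / sizeof(float)] __attribute__((aligned(128)));

float sum_local_store(unsigned long long ea_src, unsigned n_chunks)
{
    float total = 0.0f;
    unsigned cur = 0;

    mfc_get(buf[cur], ea_src, CHUNK, cur, 0, 0);           /* kick off the first transfer */

    for (unsigned c = 0; c < n_chunks; c++) {
        unsigned nxt = cur ^ 1;
        if (c + 1 < n_chunks)                              /* overlap: fetch next chunk while we work */
            mfc_get(buf[nxt], ea_src + (unsigned long long)(c + 1) * CHUNK, CHUNK, nxt, 0, 0);

        mfc_write_tag_mask(1 << cur);                      /* wait only for the chunk we need now */
        mfc_read_tag_status_all();

        for (unsigned i = 0; i < CHUNK / sizeof(float); i++)
            total += buf[cur][i];

        cur = nxt;
    }
    return total;
}
```

Once the CPU cores and the (GP)GPU sit on the same die and share the same memory paths, which flavour of that loop you end up writing matters a lot less, which is the "moot point" above.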
 