No more than a CPU could function if you tore out all the ALUs.
But to follow your logic, a dual core CPU isn't dual core, as it only has one master scheduler?
I doubt the manufacturers are going to target an 80 mini-core CPU at the desktop.
The performance profile on today's code would be horrible; today's code will be tomorrow's legacy software, and today's legacy software will still be tomorrow's legacy software.
Future desktop applications aren't going to thread out 80 ways, and past about 4 cores, the benefits of more cores drop fast on most of the workloads the CPU must target.
An 80 mini-core desktop CPU would basically trash performance on tasks CPUs are traditionally good at, just to do a lousy job at catching up to the GPU or a passable job at replacing the IGP (so it can do badly at everything else).
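To put a rough number on that drop-off, Amdahl's law is the usual back-of-envelope: the serial part of a workload caps the speedup no matter how many mini-cores you throw at it. A minimal sketch, with the 90% parallel fraction picked purely for illustration rather than measured from any real workload:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = parallel fraction of the workload, n = number of cores.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume a desktop task that is 90% parallelisable (illustrative only).
p = 0.90
for n in (1, 2, 4, 8, 16, 80):
    print(f"{n:3d} cores -> {amdahl_speedup(p, n):.2f}x speedup")
# The curve climbs quickly to ~3.1x at 4 cores, then crawls toward the 10x
# ceiling: 80 cores only reach ~9x, so most of the extra silicon sits idle
# unless the code is almost perfectly parallel.
```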
Dual core has everything doubled. That includes all the control logic (and schedulers)... You can take one core out and still get a functional processor, just as you could take one quad from a GPU and still get a functional part.
I really couldn't agree more, but for some odd reason the CPU engineers I encounter don't seem to agree.
That's interesting. Which desktop CPU manufacturer do they work for?
The overall idea shown so far is multi-core designs with a mixture of core types, not just a lot of mini-cores.
Now, with that in mind, how many of these massively multi-core CPUs do Intel and AMD think they are going to sell? Obviously they will sell really well into super-computers, render farms, server warehouses, etc...
And what does that leave them offering the masses in the way of decent ILP performance?
Regardless of how parallel things become, the demand for single-threaded performance improvement isn't going to disappear.
The 80 mini-core device looked limited in what it would target if it were ever brought to market. The teraflop model that was shown required an EDRAM module bonded directly to the die, which is something that sounds pricey for desktop processors.
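For scale, a quick back-of-envelope on what a teraflop spread over 80 mini-cores implies; the ~3.2 GHz clock here is an assumption for illustration, roughly in line with what was reported for the research part:

```python
# Rough arithmetic: how much FP work each of 80 mini-cores must do per cycle
# to hit an aggregate 1 teraflop, assuming a ~3.2 GHz clock (illustrative).
target_flops = 1.0e12      # 1 teraflop aggregate
cores = 80
clock_hz = 3.2e9           # assumed clock, not a spec

flops_per_cycle_per_core = target_flops / (clock_hz * cores)
print(f"~{flops_per_cycle_per_core:.1f} flops per core per cycle")
# Roughly 4 flops per core per cycle, e.g. two fused multiply-adds -- simple
# FP engines, which is exactly why such a core does little for the branchy,
# single-threaded code desktops actually run.
```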
THE ATI R600 will represent the last of its breed, the monolithic GPU.
It will be replaced by a cluster of smaller GPUs with the R700 generation. This is the biggest change in the paradigm since 3Dfx came out with SLI, oh so many years ago.
Basically, if you look at the architecture of any modern GPU, R5xx/6xx or G80, it comprises pretty modular units connected by a big interconnect. Imagine if the interconnect were more distributed, like, say, Opterons over HT: you could have four small chips instead of one big one.
This would have massive advantages in design time: you only need to make a chip a quarter of the size or less, then just place many of them on the PCB. If you want a low-end board, use one; mid-range, use four; pimped-out edition, 16. You get the idea: Lego.
It takes a good bit of software magic to make this work, but word has it that ATI has figured out this secret sauce. What this means is R700 boards will be more modular, more scalable, more consistent top to bottom, and cheaper to fab. In fact, when they launch one SKU, they will have the capability to launch them all. It is a win/win for ATI.
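"Cheaper to fab" is mostly a yield story: defect hits scale with die area, so a quarter-size die yields far better than a monolithic one. A rough sketch using the textbook Poisson yield model; the defect density and die areas below are invented for illustration, not anyone's real process data:

```python
import math

# Poisson yield model: yield = exp(-defect_density * die_area).
# Numbers are illustrative only.
defect_density = 0.5        # defects per cm^2 (assumed)
big_die_area   = 4.0        # cm^2, one monolithic GPU (assumed)
small_die_area = 1.0        # cm^2, one of four "Lego" chips (assumed)

def die_yield(area_cm2: float, d0: float) -> float:
    return math.exp(-d0 * area_cm2)

big   = die_yield(big_die_area, defect_density)
small = die_yield(small_die_area, defect_density)

print(f"monolithic die yield : {big:.1%}")    # ~13.5%
print(f"small die yield      : {small:.1%}")  # ~60.7%
# Even needing four good small dies per board beats the big die comfortably,
# since dies are tested before assembly and a bad small die is discarded on
# its own instead of scrapping a whole 4 cm^2 chip.
```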
There have been several code names floating around for weeks on this, and we hear it is pretty much a done deal. Less concrete is the rumour that G90 will take a similar path, but things are pointing in that direction.
G80 and R600, or most likely their descendants in the next half-generation step, will be the biggest GPUs ever. I am not sure this is something to be proud of, but the trigger has been pulled on the next big thing. The big GPU is dead, long live the swarm of little GPUs. µ
You will come to the conclusion that eventually gfx-specific instructions will be added to AMD's CPU cores, allowing them to process gfx; this is where CPUs with 4+ cores come into play, meaning that multi-core will take over 3D rendering.
I think the idea is that manufacturing one giant die is unfeasible going forward.
That's not strictly true. The dynamics are relatively simple: Each chip you design out of a base architecture is going to cost you millions of dollars in engineering and tape-out costs, plus all the related overhead (sales, marketing, phasing out old products, etc.) - and if that chip is aimed at too much of a niche audience for too short of a timeframe, it's not going to be worth the investment anymore, unless you feel it's necessary from a mindshare POV.
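That trade-off is easy to put toy numbers on; a quick break-even sketch, where every figure is invented for illustration rather than a real engineering or tape-out cost:

```python
# Toy break-even model for deciding whether a derivative chip is worth taping out.
# All numbers are invented for illustration only.
engineering_cost = 20e6      # design + validation, USD (assumed)
tapeout_cost     = 5e6       # masks + bring-up, USD (assumed)
overhead         = 10e6      # sales, marketing, product phase-out, etc. (assumed)
margin_per_unit  = 25.0      # gross margin per chip sold, USD (assumed)

fixed_cost = engineering_cost + tapeout_cost + overhead
break_even_units = fixed_cost / margin_per_unit
print(f"Need to sell ~{break_even_units / 1e6:.1f} million units to break even")
# ~1.4 million units: fine for a mainstream part with a long life, hopeless for
# a niche chip that gets replaced in six months -- which is the point above.
```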
Edit: I should say I find this unlikely, personally. I think there will be enough special-function stuff going on to keep this from being efficient. Things like Z tricks, video acceleration, etc... or even just all the stuff that's currently in NVIO (whatever that is). And that's before we even get to the parallelism arguments, and whether there will ever be enough CPU "cores" to effectively replace current levels (let alone future levels) of GPU parallelism...
So you're expecting these cores to be identical to each other? However many cores there are in your hypothetical CPU-that's-taken-over-graphics-as-well, they're all the same?
Didn't you read the phrase where Phil Hester said that GP CPU cores will be treatable as almost specialised hardware? And if you imagine a 48-core CPU with a GPU ISA, don't you agree with me that it would be good at gfx processing?
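For what "lots of GP cores doing gfx" amounts to in practice, here is a minimal sketch of per-pixel work farmed out across CPU cores: a toy gradient "shader" split by scanline, purely illustrative and nothing like a real driver or Hester's actual proposal:

```python
from multiprocessing import Pool, cpu_count

WIDTH, HEIGHT = 640, 480  # toy framebuffer size

def shade_row(y: int) -> list[tuple[int, int, int]]:
    # Toy per-pixel "shader": a simple colour gradient. A real pipeline would
    # also need rasterisation, Z, texturing, etc. -- the fixed-function bits
    # the sceptics above are pointing at.
    return [(x * 255 // WIDTH, y * 255 // HEIGHT, 128) for x in range(WIDTH)]

if __name__ == "__main__":
    # Each scanline is independent, so the work spreads across however many
    # general-purpose cores the CPU happens to have.
    with Pool(cpu_count()) as pool:
        framebuffer = pool.map(shade_row, range(HEIGHT))
    print(f"shaded {len(framebuffer) * WIDTH} pixels on {cpu_count()} cores")
```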