The sweet spot on PC is different from consoles. For consoles, the sweet spot will be whatever is in the box, as software will be coded to take advantage of it and the CPU present.
I know this doesn't answer your question, but it should be kept in mind. :smile:
Not really what I'm talking about but I agree with what you said.
I'm speaking of a technical sweet spot. For example, I don't know how the wiring inside an AMD GPU scales with the number of SIMD arrays, when the ultra-threaded dispatch processor works in an optimal fashion, or whether the command processor is sometimes a bottleneck.
In GCN, AMD adds more logic but tries not to break its "SIMD structure"**, which achieves high density. So to me a compute unit is a bit of a 'tinier GPU', an old SIMD array acting as multiple ones.
** AMD's SIMD structure seems to have moved like this:
x4 quad SIMD5 units (as in old designs / Xenos)
x4 quad VLIW5 units
x4 quad VLIW4 units
x4 quad SIMD4 units (GCN)
Some have hinted, at least for the last step, that the underlying hardware is not changing; it's more how it is fed that's changing. In GCN, each quad of what used to be called a SIMD is addressed separately.
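Since the argument hinges on how the arrays are fed rather than on the arrays themselves, here's a toy model of that feeding difference. This is my own sketch under simplified assumptions (widths, names and issue cadence are not the real hardware): a VLIW array consumes one compiler-packed bundle per cycle for the whole array, while a GCN-style CU addresses its four SIMDs separately, each pulling from its own wavefront queue.

```python
# Toy model (my own simplification, not actual hardware behaviour) of
# "same arrays, different feeding": one VLIW array fed pre-packed bundles
# vs. four separately-addressed SIMDs each with their own wavefront queue.
from collections import deque

def vliw_issue(bundles, cycles):
    """One wide SIMD: consumes one compiler-packed bundle per cycle;
    slots the compiler couldn't fill are simply lost throughput."""
    issued, queue = 0, deque(bundles)
    for _ in range(cycles):
        if queue:
            issued += len(queue.popleft())   # only the ops actually packed
    return issued

def gcn_issue(wave_queues, cycles):
    """Four independent SIMDs: each cycle, every SIMD with a ready
    wavefront issues one op from its own queue (no packing needed)."""
    issued, queues = 0, [deque(q) for q in wave_queues]
    for _ in range(cycles):
        for q in queues:                     # each quad addressed separately
            if q:
                q.popleft()
                issued += 1
    return issued

# Poorly packed VLIW5 bundles (3 of 5 slots used) vs four SIMDs that
# always have independent wavefronts queued up.
bundles = [["op"] * 3 for _ in range(10)]
waves   = [["op"] * 10 for _ in range(4)]
print(vliw_issue(bundles, cycles=10))  # 30 ops -- packing limits utilisation
print(gcn_issue(waves, cycles=10))     # 40 ops -- each SIMD stays busy
```

The point isn't the exact numbers, just that the second scheme's utilisation depends on having enough independent wavefronts rather than on what the compiler managed to pack.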
I wonder how much the VLIW5 SIMD arrays have in common with the Xenos SIMD arrays. I would not be that surprised if they are close (with the VLIW5 obviously being improved) and the main difference is the way they are fed. So I've been wondering for a while whether VLIW5 might make Xenos emulation easier.
I think the idea of seeking VLIW5 for performance/die size reasons is a bit backwards.
Depending on what one wants, it may be easier to scale up a design than to scale another one down. VLIW5/VLIW4/SIMD4 is not really the only concern; we don't know what is responsible for the increase in transistor count from one generation to another. I would not be surprised if the move from VLIW5 to VLIW4 to SIMD doesn't make much of a difference in transistor count; it may be more about LDS, cache, glue logic, etc. I don't know.
I consider VLIW5 designs as a whole because there was less "fat" to them overall. They also seem to perform better in graphics tasks, which, even if compute is becoming more and more relevant, is still not a moot point. There are also the possible (easier) BC implications.
Console design has always been forward looking. Not necessarily peak for today's games, but built with flexibility and forethought into how tomorrow's games may be made.
That's why I'm digging this multi "tiny GPU" idea.
GCN looks a bit like a modern CPU: it's trying to do a lot of things. To what extent could lighter GPUs be competitive? I wonder.
And GCN still doesn't scale perfectly even in compute benchmarks; see how Tahiti is around twice Cape Verde's performance or less. It's possible that the "sweet spot" in efficiency is actually even below Cape Verde. So that's the idea: putting PC restrictions aside, when is it better to simply have multiple GPUs running in parallel than one GPU trying to multitask, facing various problems, being fat and super complex, etc.?
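To put a rough number on that scaling remark, here's a back-of-the-envelope check. The CU counts are the shipping HD 7970 (Tahiti) and HD 7770 (Cape Verde) configurations; the "~2x" figure is just the loose ratio quoted above, not a measured benchmark, and clock and bandwidth differences between the two SKUs are ignored.

```python
# Rough sanity check of "Tahiti is around twice Cape Verde or less".
tahiti_cus = 32          # HD 7970 (Tahiti)
cape_verde_cus = 10      # HD 7770 (Cape Verde)
observed_speedup = 2.0   # loose figure quoted above, not a measurement

ideal_speedup = tahiti_cus / cape_verde_cus           # 3.2x if scaling were linear
per_cu_efficiency = observed_speedup / ideal_speedup  # ~62%

print(f"ideal speedup:     {ideal_speedup:.1f}x")
print(f"per-CU efficiency: {per_cu_efficiency:.0%}")
```

If the big chip really only gets ~62% of the small chip's per-CU throughput, then several small GPUs sized around the "sweet spot" could in principle match or beat one big GPU of the same total width, which is exactly the idea being floated here.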
NB: I'm speaking of multiple GPUs on the same die, so sharing a memory connection, but since we're speaking of compute, bandwidth seems to be less of a bottleneck than in rendering. Speaking of benches or real-world use, if you're not bandwidth limited and you run the bench on two or more GPUs, you're likely to get 2x the result or more.
I expect this to equate to a GCN-type core for both PS3 and xb360.
And yes, I expect AMD will not have an issue licensing their GCN tech to make money off of it, as they aren't that short-sighted either.
I don't know; a more exotic part has an obvious geeky appeal.
From a business POV, whatever AMD does, I hope they do it at the right price. I think of it a bit like F1: the GPU is the engine, the CPU is the driver, and the brand is paying for advertising and gas (which F1 uses a lot of).
It makes sense for all the actors to get their fair share of the benefits (not that there are big profits in F1, it's more marketing benefits, but anyway...), as without one of the actors there is no team any longer. So in a standard market situation I would call the Nvidia and Intel situation with the Xbox "normal": they sold what they had at a fair price. If the brand subsidizes like crazy, that's not their problem, even less their fault.
That's a bit of a problem Sony did not have when they were producing in-house. Now that's different: without some critical IP, Sony, Nintendo and MS are going nowhere.
IBM has extra capacity (foundry and engineering), so they are willing to let critical technology go at a bargain to keep a division running.
As for AMD, now I'm not that sure they have an incentive to be that much better than Nvidia. They are critical to another brand's business plan. They are dominating the market in perf/cost. Without being a bad partner, asking for a lot more than previously makes a lot of sense.
I see an inherent problem with subsidizing when you're no longer in a position to produce any of the critical IP for your product. Say you plan to make billions out of your product: how much should the critical IP providers ask of you? I would say quite a fair share, and the fact that you subsidize your product is completely irrelevant to them (if not completely, then quite a bit).
IBM has an incentive to offer a good deal. AMD, I'm less convinced: especially given their shitty financial situation, they need to make a bunch of money out of the deal, and they have room before others start to look remotely attractive. So GCN or not might not be the issue, but rather the kind of deal with MS: selling an IP (whatever it is) or a more Nvidia-like approach as with the Xbox? I hope AMD realizes this, that's all.
To me, looking at their overwhelming competitive advantage, they should not let MSFT buy an IP outright; they should get royalties per chip and secure income over an extended period.
What would they do otherwise... that's another matter. They are losers, there is no other word to qualify them, and they may go bankrupt soon, since with Windows on ARM and the sales of tablets and phones, etc., Intel might no longer be a target for antitrust policies soon.