Predict: The Next Generation Console Tech

I agree the console GPU could even be Sea Islands, though I'd call Sea Islands GCN; it's a refresh of the same architecture, maybe bringing the coherency protocol (or is it just disabled on the 7000 series because there's no coherent PCIe yet?).

While the 7770 was quickly panned by bean counters and people addicted to their 200 W cards, wait till it comes down to 100 euros and it will be pretty awesome. Such a card doesn't even need meaningful cooling in your PC case.

It's still cheaper than the Cell SPE speculations we've had for years here. GCN sits somewhere between an older GPU and Cell in terms of usability for computation. GPU computing may also simply be used inside the graphics pipeline, just not operating on pixel quads or vertices (the easiest current example is decompression, but more tasks will come; decompression of displacement maps, for one thing).
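To make that decompression example concrete, here is a toy data-parallel sketch (nothing from any real console pipeline; the row-wise delta-coded 8-bit format, the step size and the function names are all made up) of the kind of per-row decode a compute kernel could do alongside the fixed-function pipeline:

Code:
import numpy as np

# Hypothetical compressed format: each row of the displacement map is stored as
# a start value plus signed 8-bit deltas, quantised to a fixed step size.
STEP = 1.0 / 127.0   # quantisation step (made up for the example)

def compress_rows(disp):
    """Quantise a float32 displacement map into per-row delta codes."""
    q = np.round(disp / STEP).astype(np.int32)
    starts = q[:, 0]
    deltas = np.diff(q, axis=1).astype(np.int8)   # assumes deltas fit in 8 bits
    return starts, deltas

def decompress_rows(starts, deltas):
    """Each row decodes independently -> maps naturally onto one work-group per row."""
    q = np.concatenate([starts[:, None], deltas.astype(np.int32)], axis=1)
    return np.cumsum(q, axis=1).astype(np.float32) * STEP   # prefix sum is the GPU-friendly part

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    disp = np.cumsum(rng.normal(0, 0.002, size=(64, 256)), axis=1).astype(np.float32)
    s, d = compress_rows(disp)
    out = decompress_rows(s, d)
    print("max error:", float(np.abs(out - disp).max()))   # bounded by the quantisation step

Each row decodes independently, which is exactly the shape of work that fits wavefronts without ever touching the pixel quad / vertex path.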
 
While the 7770 was quickly panned by bean counters and people addicted to their 200 W cards, wait till it comes down to 100 euros and it will be pretty awesome. Such a card doesn't even need meaningful cooling in your PC case.
Indeed, its performance is more than welcome for the power envelope. I can no longer stand even a mid-tower PC, so I'm one of the few potential customers.

Back to GCN: it's just too bad nobody with actual knowledge from AMD can speak here. I'm really curious about how various "configurations" affect the efficiency of the previous generation.
That sounds obscure; by "configurations" I mean SIMD "length", the amount of LDS per SIMD array, the number of SIMD arrays per ultra-threaded dispatch processor / command processor.

The idea would be to know where the "sweet spot" in efficiency is for the old AMD VLIW5 architectures.
AMD searches for a sweet spot, but based on some performance target, because a GPU has to be competitive. I'm speaking of a real sweet spot, as in: past this point, if you want to hit this target you're better off with 2, 3 or more tiny full-blown GPUs.
I've read multiple times from the most qualified members here that multi-GPU is not much of a problem by itself; it's more a PC API problem, plus the fact that developers can't take it for granted.

I'm not sure I'm clear, but what I'm asking is: how would a multi-GPU based on the previous "slim" VLIW5 architecture compare to "fat" GCN?

For example, Cape Verde is literally 2 Turks (a bit more, and there would be redundant parts such as the memory controllers and UVD engine). I'm not considering it because of rumors but because it fits well as an example.
The closest comparison I can find is Civilization 5, where 2 full-blown Turks do about the same as, or a bit better than, the GCN Cape Verde. I believe GCN would still obliterate such a design in compute tasks (where it is impressive; the fluid simulation result is amazing, with ~100% improvement for a 50% increase in transistors).

But Turks might not be the sweet spot; the sweet spot for efficiency may be lower, in a performance ballpark where there is no market. What got me thinking is: make "cheap" changes to alleviate some restrictions of the previous architecture, stay in a tiny sweet spot, and then rely on multi-GPU processing.
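For a rough sanity check on that "2 Turks vs one Cape Verde" framing and the compute-per-transistor point, a quick back-of-envelope; the transistor, die-size and SP figures are the approximate public ones for Turks (HD 6670 class) and Cape Verde (HD 7770) as I remember them, so treat them as ballpark only:

Code:
# Ballpark public figures, from memory -- approximate and only for illustration.
turks      = {"transistors_m": 716,  "die_mm2": 118, "sps": 480}   # VLIW5 (HD 6670 class)
cape_verde = {"transistors_m": 1500, "die_mm2": 123, "sps": 640}   # GCN   (HD 7770)

print(f"2x Turks  : ~{2 * turks['transistors_m']} M transistors, {2 * turks['sps']} SPs "
      f"(memory controllers, UVD, etc. counted twice)")
print(f"Cape Verde: ~{cape_verde['transistors_m']} M transistors, {cape_verde['sps']} SPs")

# The fluid-simulation data point quoted above: ~100% more performance for ~50% more transistors.
perf_gain, transistor_gain = 2.0, 1.5
print(f"Compute perf per transistor vs the VLIW5 part: ~{perf_gain / transistor_gain:.2f}x")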

As an example, it could be something like this (I've no clue):
Move the SIMD width from 16 VLIW5 units to 8 VLIW5 units
Execute the same wavefront on this array (hiding twice as much latency)
Don't touch the amount of LDS and L1 texture cache per array
2 texture units per array
8 SIMD arrays
24 Z/stencil ROP units
6 color ROP units
X KB of L2
+ dedicated hardware (command processor, rasterizer, etc.)

So you get that optimal GPU and then copy-paste it according to your budget. It's a bit the same approach as in today's consoles: multiple lightweight in-order CPU cores versus a single high-performance core (at launch, or within the transistor budget at launch).
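To put some (entirely made-up) numbers on that tile-and-replicate idea, a small sketch of what the narrower SIMD buys in latency hiding and what copy-pasting the tile does to peak throughput; the 0.8 GHz clock and the MAD-based peak are my own assumptions, not part of the proposal in the list above:

Code:
# Hypothetical "tiny GPU" tile from the list above -- every figure is made up.
SIMD_WIDTH     = 8      # VLIW5 units per SIMD array (down from 16)
SLOTS_PER_UNIT = 5      # VLIW5 -> up to 5 ALU ops per unit per cycle
SIMD_ARRAYS    = 8
WAVEFRONT_SIZE = 64     # threads per wavefront, as on AMD hardware of that era
CLOCK_GHZ      = 0.8    # made-up clock

# A 64-wide wavefront on an 8-wide array needs 8 cycles per instruction instead of 4,
# so a single wavefront in flight covers twice as many cycles of latency.
cycles_per_instr = WAVEFRONT_SIZE // SIMD_WIDTH
print(f"cycles per wavefront instruction: {cycles_per_instr} (vs 4 on a 16-wide array)")

def tile_gflops(n_tiles):
    """Peak MAD throughput if you just copy-paste the tile n times (2 flops per MAD)."""
    lanes = n_tiles * SIMD_ARRAYS * SIMD_WIDTH * SLOTS_PER_UNIT
    return lanes * 2 * CLOCK_GHZ

for n in (1, 2, 4):
    print(f"{n} tile(s): {tile_gflops(n):7.1f} GFLOPS peak")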

What do you guys think?
Now I think it's giving me a headache... :LOL:
 
The idea would be to know where the "sweet spot" in efficiency is for the old AMD VLIW5 architectures.

The sweet spot on PC is different than on consoles. For consoles, the sweet spot will be whatever is in the box, as software will be coded to take advantage of it and the CPU present.

I know this doesn't answer your question, but it should be kept in mind. :smile:

I think the idea of choosing VLIW5 for performance/die size is a bit backwards.

Console design has always been forward looking. Not necessarily peak for today's games, but built with flexibility and forethought into how tomorrow's games may be made.

I expect this to equate to a GCN-type core for both the PS3 and xb360 successors.

And yes, I expect AMD will not have an issue licensing their GCN tech to make money off of it as they aren't that short sighted either.
 
I won't post further, but indeed AMD may have seen MSFT come with X hundred million and said OK to everything without questioning anything other than "we're saved for 2 quarters". That's why ultimately they are set to disappear sooner rather than later, maybe why the brightest people are leaving the company, etc.
Since when has the talent started leaving MS?
I have not heard anything myself.
 
The sweet spot on PC is different than on consoles. For consoles, the sweet spot will be whatever is in the box, as software will be coded to take advantage of it and the CPU present.
I know this doesn't answer your question, but it should be kept in mind. :smile:
Not really what I'm talking about but I agree with what you said.
I'm speaking of a technical sweet spot. I don't know how the wiring inside an AMD GPU scales with the number of SIMD arrays, for example, or when the dispatch/ultra-threaded processor works in an optimal fashion, or whether the command processor is sometimes a bottleneck.
In GCN, AMD adds more logic but tries not to break their "SIMD structure"**, which achieves high density. So a compute unit is a bit of a 'tinier GPU' to me, an old SIMD acting as multiple.
** The AMD SIMD structure seems to have moved like this:
x4 quad SIMD5 units (as in the old designs / Xenos)
x4 quad VLIW5 units
x4 quad VLIW4 units
x4 quad SIMD4 units (GCN)
Some hinted, at least for the last step, that the underlying hardware is not changing; it's more how it is fed that's changing. In GCN each quad of what used to be called a SIMD is addressed separately.

I wonder how much the VLIW5 SIMD arrays have in common with the Xenos SIMD arrays. I would not be that surprised if they are close (with obviously the VLIW5 being improved) and the main difference is the way they are fed. So I've been considering for a while how VLIW5 might make Xenos emulation easier.

I think the idea of choosing VLIW5 for performance/die size is a bit backwards.
Depending on what one wants, it may be easier to scale a design up than to scale another one down. VLIW5/VLIW4/SIMD4 isn't really the only concern; we don't know what is responsible for the increase in transistor count from one generation to another. I would not be surprised if the move from VLIW5 to VLIW4 to SIMD doesn't make much of a difference in transistor count; it may be more about LDS, cache, glue logic, etc. I don't know.
I consider VLIW5 as a whole because those designs were less "fat" overall. They also seem to perform better in graphics tasks, which, even if compute is becoming more and more relevant, is still not a moot point. There are also the possible (easier) BC implications.
Console design has always been forward looking. Not necessarily peak for today's games, but built with flexibility and forethought into how tomorrow's games may be made.
That's why I'm digging this multi "tiny GPU" idea ;) GCN looks a bit like modern CPUs, it's trying to do a lot of stuff; to what extent could lighter GPUs be competitive? I wonder.
And GCN still doesn't scale perfectly, even in compute benchmarks: see how Tahiti is around twice Cape Verde's performance or less. It's possible that the "sweet spot" in efficiency is actually even below Cape Verde. So that's the idea: putting PC restrictions aside, when is it better to simply have multiple GPUs running in parallel than one trying to multitask, facing various problems, being fat and super complex, etc.?

NB: I'm speaking of multiple GPUs on the same die, so sharing a memory connection, but since we're speaking of compute, bandwidth seems to be less of a bottleneck than in rendering. Speaking of benchmarks or real-world use, if you're not bandwidth limited and you run the benchmark on two or more GPUs, you're likely to get 2x the result or more.
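Here's a toy model of that trade-off (the 0.85 scaling exponent and the 32-unit budget are invented; only the Tahiti-vs-Cape-Verde observation above motivates making the single big GPU scale sublinearly), assuming the compute workload partitions cleanly across GPUs sharing the same die and memory:

Code:
# Toy model: performance of one GPU grows sublinearly with its number of SIMD/CU blocks
# (perf ~ units ** ALPHA), while independent small GPUs on the same die are assumed to
# scale linearly on a nicely partitionable compute workload. All numbers are invented.
ALPHA = 0.85          # made-up internal scaling exponent (1.0 would be perfect scaling)

def big_gpu_perf(units):
    return units ** ALPHA

def multi_small_perf(n_gpus, units_each):
    return n_gpus * big_gpu_perf(units_each)

budget = 32           # total SIMD/CU budget to spend either way
for n in (1, 2, 4):
    per_gpu = budget // n
    print(f"{n} GPU(s) x {per_gpu:2d} units: "
          f"{multi_small_perf(n, per_gpu):6.2f} (relative throughput)")

Under that (invented) curve the split configurations come out ahead, which is all the argument needs; with a scaling exponent near 1.0 the single fat GPU wins instead.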
I expect this to equate to a GCN-type core for both the PS3 and xb360 successors.

And yes, I expect AMD will not have an issue licensing their GCN tech to make money off of it as they aren't that short sighted either.
I don't know; some more exotic part has an obvious geeky appeal.

From a business POV, whatever AMD does, I hope they do it at the right price. I think of it a bit like F1: the GPU is the engine, the CPU is the driver, and the brand is paying for advertising and fuel (which F1 cars use a lot of).
It makes sense for all the actors to get their fair share of the benefits (assuming there are profits in F1; it's more marketing benefit, but anyway...), as without one of the actors there is no team any longer. So in standard market terms I would call the Nvidia and Intel situation with the Xbox "normal": they sell what they have at a fair price. If the brand subsidizes like crazy, it's not their problem, even less their fault.
That's a bit of a problem Sony did not have when they were producing in-house. Now it's different: without somebody else's critical IP, Sony, Nintendo and MS are going nowhere.
IBM has extra capacity (foundry and engineering), so they are willing to let critical technology go at a bargain to keep a division running.
AMD, now, I'm not so sure they have an incentive to be that much better a deal than Nvidia was. They are critical to another brand's business plan. They are dominating the market in perf/cost. Without being a bad partner, asking a lot more than previously makes a lot of sense.
I see an inherent problem with subsidizing when you're no longer in a position to produce any of the critical IP for your product. Say you plan to make billions out of your product: how much should the critical IP providers ask of you? I would say quite a fair share, and you subsidizing your product is completely irrelevant to them (if not completely, then at least quite a bit).
IBM has an incentive to offer a good deal. AMD, I'm less convinced; especially given their shaky financial situation they need to make a bunch of money out of the deal, and they have room before others start to look remotely attractive. So GCN or not might not be the issue, but rather the kind of deal with MS: selling an IP (whatever it is), or a more Nvidia-like approach as in the Xbox? I hope AMD realizes this, that's all.
To me, looking at their overwhelming competitive advantage, they should not let MSFT buy the IP outright; they should get royalties per chip and secure income over an extended period.
What would they do otherwise... that's another matter; they are losers, there is no other word to qualify them, and they may go bankrupt soon, what with Windows on ARM and the sales of tablets and phones, etc. Intel might no longer be a target for antitrust policies soon.
 
...I would call the Nvidia and Intel situation with the Xbox "normal": they sell what they have at a fair price. If the brand subsidizes like crazy, it's not their problem, even less their fault...

I'll address the rest of your post in a bit but I wanted to get some clarification from you on this.

Why would you think the xbox1 deal was normal?

By going about business in this way, the console maker cannot have control over costs and by extension, they don't have control of either the MSRP, or the profit/loss.

MS was new to the game and so did what they could do and learned their lesson, but console makers do not buy chips in this manner for the reasons I've outlined. They license/buy the IP, and produce the chip at the best price available. This way they have control of pricing.

Sega did this way back when, Sony did this way back when, and now MS learned their lesson.

I'm not sure about Nintendo as I don't think they ever shrink their chips, so they may not have IP ownership control to do so ...


For those licensing their IP, there's nothing wrong with making some money on a product without price-gouging. ARM has made a fortune with this practice! :smile:

Also, I'm quite sure IBM is also not getting worked over in any way for the work they put into Xenon and Cell.

If any of the above felt they were being treated unfairly, they wouldn't have continued doing business together.

Hence why Nvidia is not in the xb360 and Intel is nowhere to be found, yet AMD and IBM are both (rumored to be) working on new console chips for Sony/MS.
 
I wonder how much the VLIW5 SIMD arrays have in common with the Xenos SIMD arrays. I would not be that surprised if they are close (with obviously the VLIW5 being improved) and the main difference is the way they are fed. So I've been considering for a while how VLIW5 might make Xenos emulation easier.
The closest thing to the Xenos shaders would be the X1k vertex shaders.
 
Question:

Would the Cayman architecture yield more performance/watt/area than GCN?
If so, would the closed nature of a console make this type of architecture more attractive for a console manufacturer?
The GPGPU element does not seem completely relevant to consoles currently or (IMHO) in the near future. What I mean is that this generation we have not seen a multitude of apps on consoles other than games, browsing, media playback and shouting swear words at friends across the world whilst playing an FPS.

Result: why bother with GCN or a GCN-type architecture for a console?
 
Why would you think the xbox1 deal was normal?

... console makers do not buy chips in this manner for the reasons I've outlined. They license/buy the IP, and produce the chip at the best price available. This way they have control of pricing ...
Because MS got what it paid for: a top-of-the-line CPU and a top-of-the-line GPU.
There was no bad business practice from either Intel or Nvidia; the problem was MS not managing to sell what it bought, i.e. the best money could buy.

Now, thanks to circumstances, IBM is willing to sell high-end CPU IP for "cheap". Circumstances had it that ATI did the same last time for MS.
The thing is, wanting to have control of your costs is nice, but you have a problem: you aim at the high end, and now none of the console actors is able to produce by themselves a single critical part of their upcoming design. See the problem?
An utter dependence on other companies' IP, and thus on their willingness to give it up for cheap.
That's clearly something that is bothersome for the loss-leader model introduced by Sony back when they were producing their own stuff.

It's a bit like in the embedded world: as performance goes up, at some point companies that produce nothing themselves end up in a dead spot, at the mercy of a few providers that can keep prices where they are comfortable, hitting your margins, and/or put the same chips in their own devices and sell them cheaper, hitting your margins again (=> slow death). We might not be there yet with consoles.

We might not be there yet with consoles; AMD may have gotten the big bag of money their tech deserves. But looking forward it's problematic, as console manufacturers would turn into what they've already been for one gen: OEMs with low volumes, and they could be dealt with as such by the companies owning the critical IP for their products. Then, even if you bought or licensed an IP, you have to compete for foundry capacity with lower volumes too.
 
The GPGPU element does not seem completely relevant to consoles currently or (IMHO) in the near future. ...

Result: why bother with GCN or a GCN-type architecture for a console?

Sony felt the same way about unified shaders.
 
Would the Cayman architecture yield more performance/watt/area than GCN?
If so, would the closed nature of a console make this type of architecture more attractive for a console manufacturer?
...
Result: why bother with GCN or a GCN-type architecture for a console?
GCN offers better perf per watt and per sq. mm, but it uses a different process.
But GCN achieves super good transistor density; it's like a perfect shrink overall, roughly twice as many transistors per sq. mm.
Compute performance explodes, graphics performance a bit less, but I re-read the reviews and I have to acknowledge that I'm a little hard on the thing.
What it achieves in the face of cards backed by a 256-bit bus is pretty impressive in some places.

I guess it may be more of an AMD positioning issue. Tahiti is super high end, Pitcairn could look like a GCN equivalent of Cypress, and Cape Verde is the replacement for Juniper after no refresh for a long while; worse, as a product it may not be enough for a Juniper owner to upgrade.
An equivalent of Barts (~half Tahiti) with a 192-bit bus and cheap 1 GHz GDDR5 might have been a better showcase (albeit at a higher price range, just a bit above the HD 6850/70).

Something with the equivalent of 8/6 GCN SIMDs (like the 7750 and lesser parts) might have been better labelled as 7670 and lower; the chips are in that die-size range.

(Basically the product line would be Tahiti, 3/4 Tahiti, 1/2 Tahiti, 1/4 Tahiti.)
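A quick back-of-envelope on the density claim and that Tahiti-fraction line-up; the transistor counts, die sizes and the 2048-ALU figure are the commonly quoted public numbers as I recall them, so double-check before leaning on them:

Code:
# Commonly quoted figures, from memory -- approximate, illustrative only.
cayman = {"transistors_m": 2640, "die_mm2": 389}   # HD 6970, 40 nm
tahiti = {"transistors_m": 4310, "die_mm2": 365}   # HD 7970, 28 nm

def density(chip):
    return chip["transistors_m"] / chip["die_mm2"]

print(f"Cayman density: {density(cayman):5.2f} Mtransistors/mm^2")
print(f"Tahiti density: {density(tahiti):5.2f} Mtransistors/mm^2 "
      f"(~{density(tahiti) / density(cayman):.2f}x Cayman)")

# The "Tahiti, 3/4 Tahiti, 1/2 Tahiti, 1/4 Tahiti" line-up in ALU counts,
# starting from Tahiti's 2048 ALUs.
for frac in (1.0, 0.75, 0.5, 0.25):
    print(f"{frac:>4} Tahiti -> {int(2048 * frac)} ALUs")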
 
Because MS got what it paid for: a top-of-the-line CPU and a top-of-the-line GPU.
There was no bad business practice from either Intel or Nvidia; the problem was MS not managing to sell what it bought, i.e. the best money could buy.

Yes, but the deals that Nvidia and Intel struck were not conducive to a long-term relationship.

For some companies this is fine; they look around for opportunities and seek to make the most profit off each one, regardless of the long-term business-relationship consequences. Others look to build a relationship that is viable for both parties in the long run.

Intel at the time was (and still is ... for now) selling gangbusters in the PC space, and so any partnership in the console space was not seen as critical to their long-term plans.

Nvidia at the time was dominating the PC space and, again, did not see the need to develop a long-term business partnership with Microsoft.


Both of these short-sighted, near-term profit deals brought a good deal of money to Intel and Nvidia. But neither company was considered for the follow-up xbox2, and both are absent from xbox3 as well.

Nvidia was fine with that as they teamed up with Sony, and Intel can still sell as much as they can make, so no big loss for them.

But note that Nvidia did bend on the deal with Sony: Sony isn't charged per chip the way Nvidia charged MS on Xbox1.

___________________________

Just because there is a way to maximize profit in the short term does not mean it is in the business's best long-term interests.

Some businesses take this into account, others do not. Remember, there will always be competition where money is to be made.

Apply this as you will for nextgen tech.
 
Depending on what one wants, it may be easier to scale a design up than to scale another one down.

The BC of the design is important IMO, but I don't anticipate AMD engineers having an issue with emulating Xenos.

Assuming that to be the case (I could be wrong), I'd think the most cost effective solution would be to use an existing GCN design. Perhaps even a binned part that could also be used for PC GPUs with the "glue" being on the much smaller CPU...;)

NB: I'm speaking of multiple GPUs on the same die ...

I think this is another interesting solution (I've suggested something similar) to the possible problem of yield on 28nm, but any yield issues should be resolved by 2013 if they are intending to wait until then. However, if they are planning a more moderate GPU die size (~250mm2), such drastic measures would seem to be unnecessary. Especially if they can leverage the same chip in the PC space. :cool:


To me, looking at their overwhelming competitive advantage, they should not let MSFT buy the IP outright; they should get royalties per chip and secure income over an extended period.
What would they do otherwise... that's another matter; they are losers, there is no other word to qualify them, and they may go bankrupt soon, what with Windows on ARM and the sales of tablets and phones, etc. Intel might no longer be a target for antitrust policies soon.

AMD may be making the best GPU for console purposes, but they aren't the only game in town. Nvidia is a possible solution (assuming), PowerVR is making strides, or they could even hire key engineers and produce the chip fully in-house.

But for AMD, turning away MS or Sony is to turn away "free" money.
Whatever deal AMD gets will be more money than they would get without the deal and would likely lead to further sales in the PC space. It's also in AMD's best interest to be a close MS partner for future DirectX implementation.

AMD's financial woes are not due to their deal with MS (or Nintendo) "working them over".
The CPU side of the company is another deal entirely...
 
So I'm wondering: if Sony decides to launch the PS4 in 2013, what are the chances that they would put a GTX 600 series GPU in the machine, if they can afford it, and stay with Nvidia?
 
We already know Sony is going for an SoC solution, which means a heavily customised GPU component. So no GTX 600 series card or GPU.
 