Predict: The Next Generation Console Tech

What if it's not dedicated to GPGPU functions? What if it's flexible, like the Cell processor is (switching between functions at any given moment)?
Then why bother with discrete parts? If you have a 300 mm² total silicon budget at launch, what is the advantage of 50 mm² CPU + 100 mm² integrated GPU + 150 mm² discrete GPU over 50 mm² CPU + 250 mm² GPU? The latter is easier to design, implement, and code for. It may cost more in terms of yields, and I suppose if they use a fancy memory layer on the APU then there could be a BW advantage, which may be worth considering. But with the tech that we have now, I don't see any advantage to splitting the GPU workload over two chips.
 
Then why bother with discrete parts? If you have a 300 mm² total silicon budget at launch, what is the advantage of 50 mm² CPU + 100 mm² integrated GPU + 150 mm² discrete GPU over 50 mm² CPU + 250 mm² GPU? The latter is easier to design, implement, and code for. It may cost more in terms of yields, and I suppose if they use a fancy memory layer on the APU then there could be a BW advantage, which may be worth considering. But with the tech that we have now, I don't see any advantage to splitting the GPU workload over two chips.

Because it's not available in the necessary power envelope?
 
Then why bother with discrete parts? If you have a 300 mm² total silicon budget at launch, what is the advantage of 50 mm² CPU + 100 mm² integrated GPU + 150 mm² discrete GPU over 50 mm² CPU + 250 mm² GPU? The latter is easier to design, implement, and code for. It may cost more in terms of yields, and I suppose if they use a fancy memory layer on the APU then there could be a BW advantage, which may be worth considering. But with the tech that we have now, I don't see any advantage to splitting the GPU workload over two chips.

Considering they can go up to 500 mm² for SoCs (Nvidia regularly breaks the 500 mm² mark for their GPUs), there is no reason not to combine everything going forward.

If heat becomes a problem, just lower the clocks.

8 Jaguar cores ~ 50 mm²
24 GCN2 CUs ~ 250 mm²
DSPs, eSRAM, etc. ~ 100 mm²

Totally doable.
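For what it's worth, a quick back-of-the-envelope check of that budget in Python; all block sizes are just the rough estimates from the post above, not measured die areas:

```python
# Rough silicon-budget check using the estimates quoted above
# (illustrative figures, not measured die areas).
blocks_mm2 = {
    "8 Jaguar cores (incl. caches)": 50,
    "24 GCN2 CUs + front end":       250,
    "DSPs, eSRAM, I/O, etc.":        100,
}

total = sum(blocks_mm2.values())
print(f"Estimated SoC area: {total} mm^2")                       # ~400 mm^2
print(f"Headroom below a 500 mm^2 ceiling: {500 - total} mm^2")  # ~100 mm^2
```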
 
BoardBonobo said:
But they've got complete backwards compatibility with Gaikai. Why bother with the expense of that when they could put the money towards a more powerful GPU or smarter HCI?

Well... there go the cost savings... I also can't imagine Sony having all of the PS2 and PS3 titles available on servers. Only selected games.
 
"If anyone would build a new console today, that would be the result," Battlefield 3 executive producer Patrick Bach told Eurogamer in reference to the PC version of the game.
"At least. Probably more, because it's classic PC technology. We know everything about multi-threading now. We know everything about multi-graphics card solutions now. If someone built a console where the specs are that or more, we have the technology to do something. We could port the game to that console tomorrow."
DICE built Battlefield 3 using its new Frostbite 2 engine, designed to future proof the studio and work with the next Xbox and PlayStation.
Bach said the next-generation is a case of more horsepower - in particular multiple processors and graphics cards in a single unit.
"There's nothing we know about now that the new consoles would do differently, rather do more," Bach explained. "More processors. Bigger memory pools. Everything we have and more.
"The big step is to go from single processor to multi-processor. Single graphics card to multi-graphics card. To multi-memory. Do you do multiple memory pools or one memory pool? Since we can handle both consoles now, we control that as well. We have all the streaming systems. We have whatever we might need for the future.
"I would be surprised if there were something we couldn't do with the next-generation of consoles."
As part of an investigation into the next-generation of consoles, Crysis 2 developer Crytek UK told Eurogamer that visuals achieved using the DirectX 11 graphical benchmark were an appropriate indication of what the next Xbox and PlayStation will be capable of.
But with this extra horsepower stuffed inside new consoles, won't they be expensive?
Not so, according to Bach.

Multi-processor and multi-GPUs... Is it something like the "APU+GPU" idea? :?:
 
Then why bother with discrete parts? If you have a 300 mm² total silicon budget at launch, what is the advantage of 50 mm² CPU + 100 mm² integrated GPU + 150 mm² discrete GPU over 50 mm² CPU + 250 mm² GPU? The latter is easier to design, implement, and code for. It may cost more in terms of yields, and I suppose if they use a fancy memory layer on the APU then there could be a BW advantage, which may be worth considering. But with the tech that we have now, I don't see any advantage to splitting the GPU workload over two chips.

Would that same logic still apply if the GPGPU in the APU is being used for its computing power & not as a GPU?
 
Then why bother with discrete parts? If you have a 300 mm² total silicon budget at launch, what is the advantage of 50 mm² CPU + 100 mm² integrated GPU + 150 mm² discrete GPU over 50 mm² CPU + 250 mm² GPU? The latter is easier to design, implement, and code for. It may cost more in terms of yields, and I suppose if they use a fancy memory layer on the APU then there could be a BW advantage, which may be worth considering. But with the tech that we have now, I don't see any advantage to splitting the GPU workload over two chips.
Indeed, if you are willing to produce a quite big chip, 250 mm², I wonder if the most cost-efficient solution is, going by your figures, to simply discard all the other elements and make do with the 250 mm² chip alone. That would translate into a 50 mm² CPU with a 200 mm² integrated GPU.
That barely gives away 20% of the GPU.

The issue is that it is still significantly costlier to produce a 250 mm² chip than 190 mm² and smaller ones. I keep coming back to the Cell case and KK's comments about how IBM were pissed off by his choice to go with 8 SPEs, which made the chip bigger than 190 mm².

If the recent Nvidia presentations have truth to them, it may have gotten worse.
So you may be back to 2 chips, but going by the same Nvidia presentation, designing and implementing 2 different and complex chips could prove quite expensive.
I won't loop back to my dual-SoC rant.
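To put some illustrative numbers on the size-vs-cost point, here is a minimal sketch using the classic dies-per-wafer approximation and a simple Poisson yield model; the wafer cost and defect density below are assumptions picked for illustration, not foundry figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross dies-per-wafer approximation (ignores scribe lines)."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Simple Poisson yield model: exp(-D0 * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

WAFER_COST = 5000.0   # assumed cost of a processed wafer (illustrative)
D0 = 0.4              # assumed defect density, defects/cm^2 (illustrative)

for area in (190, 250):
    gross = dies_per_wafer(area)
    good = gross * poisson_yield(area, D0)
    print(f"{area} mm^2: {gross} gross dies, ~{good:.0f} good dies, "
          f"~${WAFER_COST / good:.0f} per good die")
```

With those assumptions the 250 mm² die comes out noticeably more expensive per good die than the 190 mm² one, which is the gap being pointed at here; the exact ratio obviously depends on the real defect density and wafer price.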
 
Would that same logic still apply if the GPGPU in the APU is being used for its computing power & not as a GPU?
Yes. Whatever you'd run on the GPGPU in the APU, you could run on a larger discrete GPU. It'll be finished sooner, and then you'd return to rendering graphics. A tightly coupled workload between the CPU and GPU in an APU could potentially execute faster, but you lose that versatility when the devs want pure graphics work.
 
Indeed, if you are willing to produce a quite big chip, 250 mm², I wonder if the most cost-efficient solution is, going by your figures, to simply discard all the other elements and make do with the 250 mm² chip alone. That would translate into a 50 mm² CPU with a 200 mm² integrated GPU.
That barely gives away 20% of the GPU.

The issue is that it is still significantly costlier to produce a 250 mm² chip than 190 mm² and smaller ones. I keep coming back to the Cell case and KK's comments about how IBM were pissed off by his choice to go with 8 SPEs, which made the chip bigger than 190 mm².

If the recent Nvidia presentations have truth to them, it may have gotten worse.
So you may be back to 2 chips, but going by the same Nvidia presentation, designing and implementing 2 different and complex chips could prove quite expensive.
I won't loop back to my dual-SoC rant.

250 mm² is not a big chip; it's a bit below medium size. The SoCs going into next-gen systems will most likely dwarf a 250 mm² chip.

It's only big by Nintendo standards.
 
250 mm² is not a big chip; it's a bit below medium size. The SoCs going into next-gen systems will most likely dwarf a 250 mm² chip.

It's only big by Nintendo standards.

Another thing is that the chip is initially going to be built on 28 nm, but they'll probably start producing it on 22 nm or 20 nm within a year of the console's introduction. I bet they designed the chip with that in mind as well. If you start with a 250 mm² chip on 28 nm, you'll get problems with the memory interface and other interfaces when you shrink it. The chip will get too small to support all those interfaces.

I'm really curious why so many people here seem to think it unlikely that we'd get a chip of, say, 450 mm². Would that really be so bad? They'll shrink it down in a little while anyway.

EDIT: This also opens up the question of what kind of interfaces it will have and how big the memory interface will be.
 
From what I understand, if you don't need to reduce heat or raise frequency, new processes are too expensive for 1-2 years after introduction, and going forward it looks even worse.
 
From what I understand, if you don't need to reduce heat or raise frequency, new processes are too expensive for 1-2 years after introduction, and going forward it looks even worse.

That didn't seem to bother Microsoft too much with their Xenon CPU. In other words, I don't think it's unrealistic to expect the SoC to be built on 20 nm towards the end of 2014, roughly a year after the console's introduction. The first 20 nm products might be on the market by the end of 2013, to put things into perspective. Oh, and Intel will be manufacturing their chips on 14 nm in 2014.
 
From what I understand, if you don't need to reduce heat or raise frequency, new processes are too expensive for 1-2 years after introduction, and going forward it looks even worse.

How can getting more chips per wafer make it too expensive and not worthwhile?
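One way to see how both statements can be true: on a new node you do get more dies per wafer, but early on the wafer itself costs more and yields worse. A hedged sketch with made-up wafer prices, an assumed 0.64x area scaling, and assumed defect densities (none of these are foundry numbers):

```python
import math

# Cost per good die across a hypothetical node shrink.
# All wafer prices, the 0.64x area scaling, and defect densities are
# assumptions for illustration only.
nodes = {
    #  name:              (die_area_mm2, wafer_cost_usd, defects_per_cm2)
    "28 nm, mature":      (350,          4500,           0.3),
    "20 nm, early ramp":  (350 * 0.64,   7500,           0.8),
}

for name, (area, wafer_cost, d0) in nodes.items():
    # Gross dies per 300 mm wafer (same approximation as above), then yield.
    gross = math.pi * 150 ** 2 / area - math.pi * 300 / math.sqrt(2 * area)
    good = gross * math.exp(-d0 * area / 100)
    print(f"{name}: ~{good:.0f} good dies per wafer, "
          f"~${wafer_cost / good:.0f} per good die")
```

Under those assumptions the early-ramp node is roughly twice as expensive per good die despite packing far more candidates onto each wafer; the gap closes as yields mature and wafer prices drop, which is why the 1-2 year lag keeps coming up.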
 
Regarding the next Xbox, and it possibly having a 1.6 GHz CPU, I saw this posted on another forum and was wondering what you guys thought:
"This is highly unlikely, but has anyone considered that the 1.6 GHz chip referred to by Hector Martin is a 16-core Bulldozer running at 1.6 GHz? There is such a beast. It's called the AMD Opteron 6262 HE (High Efficiency). It's an 8-module, 16-core chip, running at 1.6 GHz, with a TDP of 85 W."
The die size is 315 mm² on a 32 nm process (the i7-2600 is 216 mm², and the A10 rumored to be in the PS4 is 246 mm²).

If the rumors regarding the first dev kit are true, it featured something like an Intel SB i7. If dev kits are meant to simulate final hardware, why go from an i7 to Jaguar cores? I know there are differences between Intel and AMD, but I have to think that an Intel Sandy Bridge chip compares better with Bulldozer than it does with Jaguar. Plus, I feel that as dev kits go through revisions, total system (CPU and GPU) capabilities increase, and I would consider moving to Jaguar a step down. This would also match some previous rumors of a 16-core CPU, and an 85 W TDP is not too bad.

Of course this all depends on if those dev kit rumors were true. My knowledge in this regard is far exceeded by my interest, so I could be talking complete BS, which is why I asked. :)
 
One little problem is that that Opteron is two dies, so you're actually looking at 2×315 mm². As is, it's a quad-channel CPU as well (256-bit DDR3).
 
250 mm² is not a big chip; it's a bit below medium size. The SoCs going into next-gen systems will most likely dwarf a 250 mm² chip.

It's only big by Nintendo standards.
Neither MSFT nor Sony went that far with their previous and costly designs.
I can't find the Nvidia presentation, as I've no idea what its title was, but the overall costs (designing, testing, producing) of a 250 mm² chip may be worse now than they would have been in 2005.

Your claim that the SoC is going to dwarf 250 mm² is based on nothing but your opinion.
I actually base my POV on this KK interview, discussed here at the time, especially this part:
Cell has 8 embedded "SPE" CPU cores. What is the basis for this number?

Because it's a power of two, that's all there is to it. It's an aesthetic. In the world of computers, the power of two is the fundamental principle - there's no other way. Actually, in the course of development, there's this one occasion when we had an all-night, intense discussion in a U.S. hotel. The IBM team proposed to make it six. But my answer was simple - "the power of two." As a result of insisting on this aesthetic, the chip size ended up being 221 mm², which actually was not desirable for manufacturing.

In terms of the one-shot exposure area, a size under 185 mm² was preferable. I knew being oversized meant twice the labor, but on the other hand, I thought these problems of chip size and costs would eventually be cleared as we go along.
You will notice that all the chips in the 360 are below that threshold. So is the RSX in the PS3, and had IBM won the argument, so would have been the Cell.

I don't know if that limitation still holds; I once asked but got no response.
But if it is still there, blending in Nvidia's comments about overall rising costs, it makes the option of going with bigger chips not that attractive for a mass-produced device.

EDIT
I just wanted to add that the Wii U silicon budget is far from ridiculous at ~190 mm². Putting aside the requirements (BC, low power, and so on) Nintendo set for its system, without questioning their validity (OT and discussed elsewhere), if a new actor had that silicon budget, matched with a medium power budget (say ~75 W), even sticking to TSMC 40 nm (so passing on the quite costly 32 nm), I think it could have come up with a quite attractive, affordable design that could have been a proper and valid link between this gen and the upcoming one. (Even though there seems to be a shortage of "good" CPUs, as Nintendo may have found out. By good I mean not big, power efficient, with sane single-thread performance, a potent 4-wide SIMD, support for quad-core configurations, and readily available on a 45 nm lithography. I can't think of any.)
 
Since they reserved one SPE for the OS and one SPE for redundancy, the difference between 185 mm² and 221 mm² was 50% more SPEs for games. Was that such a bad decision?

According to iSuppli, at launch the Cell was costing them LESS than the 360 CPU ($89 vs $106). Are they wrong? I think they are wrong on so many things that this is probably just one more, but I thought I'd ask.
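A trivial check of that "50% more SPEs for games" arithmetic, under the assumptions stated above (one SPE reserved for the OS, one disabled for yield redundancy):

```python
# Game-usable SPEs under the assumptions above: one reserved for the OS,
# one disabled for yield redundancy.
def game_spes(total_spes, os_reserved=1, redundancy=1):
    return total_spes - os_reserved - redundancy

built    = game_spes(8)   # the 221 mm^2 Cell as shipped         -> 6
proposed = game_spes(6)   # IBM's reported 6-SPE counterproposal -> 4
gain = 100 * (built - proposed) / proposed
print(f"Game-usable SPEs: {built} vs {proposed} (+{gain:.0f}%)")
```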
 
Since they reserved one SPE for the OS and one SPE for redundancy, the difference between 185 mm² and 221 mm² was 50% more SPEs for games. Was that such a bad decision?

According to iSuppli, at launch the Cell was costing them LESS than the 360 CPU ($89 vs $106). Are they wrong? I think they are wrong on so many things that this is probably just one more, but I thought I'd ask.

Probably, since as I recall Cell was bigger and had a higher percentage of logic transistors (as opposed to dumb cache, where defects can more easily be worked around), likely making it more difficult to get good yields on.

The big cost driver in the launch PS3 was obviously Blu-ray, but I always figured Cell might have been a distant second.
 