Predict: The Next Generation Console Tech

That's probably risk production. If you look at TSMC, the 28nm volume production is basically starting now or early 2012.

That means volume for the next generation will be 2 years from now...unless you have a compelling reason to believe that they can move to a new node in 12 months. Historically speaking, that has never happened.

Moreover, there is reason to believe that the move to 20nm may be reminiscent of 40nm, with a lot of bumps early on due to variability.

DK
 
link said:
"We expect A-15 to be sampling in the first half of next year, to be in full production in Q4 2012, and to be out in hand-sets by the end of next year," says Howarth.

Sounds like production to me.

Not sure if they'll make it by then as it is rather aggressive, but that's the plan.

As I said earlier, it seems all the iPad/iPhone hype may actually bring something good to gaming after all. TSMC seems very aggressive in pushing these nodes, which mobile devices need for ever-increasing processing ability in a portable form (i.e. they need aggressive node ramps to maintain the feverish pace of new models, not just mature processes).
 
You do realize that the whole point of stacked RAM is not to connect it to the CPU/GPU over a long distance but to have it literally sitting on top of them instead? If you are going to connect them over PCB traces, you are better off using regular RAM modules instead.

This is true, but there may be some intermediate solutions before we get there, because stacking memory on top of a CPU is not trivial and there are still benefits in terms of a smaller footprint, distributed DRAM controllers, etc.

http://semimd.com/blog/2011/11/03/dram-server-roadmap-points-to-ddr4-and-3d/
“Stacking (memory) with CPU involves its own challenges,” he said. “How do we handle hundreds of amps and thousands of high-speed I/O delivered and dispersed throughout the stacks? How do we develop processes and methods to deal with a processor which sits on the top of the stack enabling heat sink attach/cooling?”

By the way that article is quite informative with regard to the DRAM roadmap ahead.

[Image: jedec1.jpg (JEDEC DRAM roadmap)]

According to JEDEC’s roadmap, DDR3 now comes in two speed grades: 1,333- and 1,600-MHz. An 1,866-MHz version is also on the roadmap. Following DDR3, Elpida, Hynix, Micron and Samsung are developing monolithic parts based on the next-generation interface — DDR4.
The first DDR4 DRAMs will come in 4-Gbit densities and 2xnm processes. The devices will operate at 2,133- and 2,400-MHz, which are slated for delivery in 2014 and 2015, respectively, according to the JEDEC roadmap. A 3,200-MHz and 16-Gbit version is due out in 2020.
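The peak-bandwidth implications of those data rates are easy to work out. A quick sketch (the 64-bit channel width is the standard DIMM assumption, not from the article; the "MHz" figures above are really megatransfers per second):

```python
# Peak bandwidth of a DDR channel: transfers/s * bus width in bytes.
# The 64-bit channel width is a standard DIMM assumption, not from the article.

def ddr_peak_bandwidth_gbs(mts, bus_bits=64):
    """Peak bandwidth in GB/s for a DDR interface at `mts` megatransfers/s."""
    return mts * 1e6 * (bus_bits // 8) / 1e9

print(ddr_peak_bandwidth_gbs(1600))  # DDR3-1600 -> 12.8 GB/s
print(ddr_peak_bandwidth_gbs(2133))  # DDR4-2133 -> ~17.1 GB/s
print(ddr_peak_bandwidth_gbs(3200))  # DDR4-3200 -> 25.6 GB/s
```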

Based on a 1.2-volt technology and a 16-bank architecture, DDR4 is expected to be about 30 percent to 40 percent lower power than DDR3. DDR4 also features VDDQ DQ termination, a 500,000 page size for x4 devices, new RAS features and error correction on the SODIMM using a common connector.
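That 30 to 40 percent figure is roughly what you'd expect from the voltage drop alone, since dynamic power scales with the square of the supply voltage. A back-of-the-envelope check (DDR3's 1.5 V nominal supply is my assumption; the article only gives DDR4's 1.2 V):

```python
# Dynamic power scales roughly with the square of the supply voltage (P ~ C*V^2*f).
# DDR3's 1.5 V nominal supply is assumed here; the article only states DDR4's 1.2 V.
v_ddr3, v_ddr4 = 1.5, 1.2
ratio = (v_ddr4 / v_ddr3) ** 2
print(f"DDR4 dynamic power vs DDR3: {ratio:.0%}")  # 64%, i.e. ~36% lower
```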

Also in 2014, the first 3D-based DDR4 chips are expected to appear in a 20nm, two-stacked configuration, according to the JEDEC roadmap. Four- and eight-stacked DDR4 versions are due out in 2016 and 2017.

Some JEDEC members are floating a 3D technology called high-bandwidth, which stacks DRAM on top of a logic device in a master-slave configuration. This is basically a server version of the so-called wide I/O memory scheme. Geared for handhelds, wide I/O DRAM is a four-channel, 128-lane technology said to enable a bandwidth of up to 12 Gbytes/s.
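The "up to 12 Gbytes/s" figure is consistent with the four-channel, 128-lane description. A rough check (the 200 MHz single-data-rate clock is my assumption; the article gives only the lane/channel counts and the total):

```python
# Wide I/O aggregate bandwidth: lanes * channels * rate, converted to bytes.
# The 200 MHz single-data-rate clock is an assumption, not from the article.
lanes_per_channel, channels, clock_hz = 128, 4, 200e6
bytes_per_cycle = lanes_per_channel * channels // 8   # 512 bits -> 64 bytes
bandwidth_gbs = bytes_per_cycle * clock_hz / 1e9
print(bandwidth_gbs)  # 12.8, in line with the "up to 12 Gbytes/s" quoted
```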

Roadmap debate

Samsung has other ideas about the DRAM roadmap. At the event, Jang Seok Choi, senior engineer at Samsung Semiconductor, presented a slide that tipped Samsung’s aggressive roadmap. As part of the plan, a 1.25-volt version of DDR3 would appear in 2011, followed by DDR3-based 3D TSV devices at 1,866-MHz in 2012, followed by DDR4 in 2013.

[Image: Samsungroad.jpg (Samsung DRAM roadmap)]

There is a pressing need for DDR4 by 2013 “to keep server GB (per system density) with the same power budget,” Choi said. “TSV is also the right solution for servers” but “one of the challenges is cost.”
Last year, Samsung announced one of the first 3D TSV DRAMs, an 8-GB memory module based on DDR3. Last month, Samsung and Micron formed a consortium to develop a serial specification for a rival 3D memory technology called the Hybrid Memory Cube (HMC). Targeted for late 2013 or early 2014, HMC will incorporate DRAM arrays stacked on a logic chip.

At the JEDEC event, Kuljit Bains, a memory technologist at Intel Corp., had a slightly different viewpoint, saying DDR4 would remain the dominant DRAM technology at least until 2015 or 2016.

Some believe that DDR4 would run out of gas before then, but Bains disagreed. “We’ve been saying DDR would run out of gas for the last 10 years,” he told SemiMD, “but DDR4 is probably the last evolutionary technology” in the DDR roadmap.

Following DDR4, the DRAM market would most likely shift towards a newfangled 3D scheme, namely the so-called high-bandwidth technology. High-bandwidth would be slightly slower but would feature “lower power transistors,” he said.

Not surprisingly, the server vendors see another scenario. IBM’s Kilmer said “DDR4 capacity, power and bandwidth improvements over DDR3 are well aligned to server needs.”

Beyond DDR4, IBM is evaluating various 3D memory schemes. The shift to those schemes depends upon the “cost effective availability of 3D devices,” he said. “It’s a question of getting the yields for the silicon vias.”

Oracle’s Dee Williams said “DDR4 would take us to 2016, 2017, or maybe 2018,” with a 3D solution coming in the not-too-distant future. Oracle is exploring all options, such as MCMs, silicon interposers and 3D TSVs.
 
I also do not see Cell making a return.

The 360 managed with very limited BC (how many sales would having none at all have cost it?); no reason Sony couldn't manage something similar.

A different era though Alpha.

How many DLC items did you buy on ps2/xbox?

BC will be crucial for the transition in building on the marketshare MS and Sony have.

Imagine how foolish one will look if they don't have BC while the other platform does...

The pros in dumping either Cell or PPE (xcpu) for a more efficient architecture don't outweigh the cons.

It's not as though there is a revolutionary CPU arch that outclasses Cell or Power and sticking with these architectures will cost them dearly.

The dev libraries and tools are mature now for both systems and will enable a smooth transition to nextgen by simply scaling them smartly, not abandoning them outright for a minimal gain in either power efficiency, processing power, or "ease of development".
 
How does DLC or downloaded titles make a damn bit of difference? Why does it matter where the game was purchased?

Cell is dead, trying to drag it forward is a severe limitation on Sony's options.

Sounds like production to me.

It's my understanding that the first A15s will probably be 32 or 28nm.
 
How does DLC or downloaded titles make a damn bit of difference? Why does it matter where the game was purchased?

Some people have an entire library which isn't transferable. If those libraries don't carry over, there is absolutely no incentive to stick with the platform. With as tight as the competition between these platforms is now, giving away any unnecessary edge should be avoided.

Cell is dead, trying to drag it forward is a severe limitation on Sony's options.

What would be vastly superior to cell for performance per die-size?

A simple modification would allow it to be much more dev friendly by going with a multicore PPE and keep the same SPEs (maybe add 1 for a total of 8). Or a simple direct scale of Cell.

It's not as though Cell is new to Devs anymore and the Cell toolkits are mature.

It's my understanding that the first A15s will probably be 32 or 28nm.

I'd imagine so, but as the quote says, they are planning on 20nm retail product by the end of 2012.
 
Some people have an entire library which isn't transferable. If those libraries don't carry over, there is absolutely no incentive to stick with the platform. With as tight as the competition between these platforms is now, giving away any unnecessary edge should be avoided.

There's plenty of incentive to stick with the platform: Live, PSN, exclusive titles, Kinect, Move, etc.

What would be vastly superior to cell for performance per die-size? A simple modification would allow it to be much more dev friendly by going with a multicore PPE and keep the same SPEs (maybe add 1 for a total of 8). Or a simple direct scale of Cell. It's not as though Cell is new to Devs anymore and the Cell toolkits are mature.

Nothing has to be vastly superior, slightly superior and slightly cheaper is plenty of reason to ditch it.

I'd imagine so, but as the quote says, they are planning on 20nm retail product by the end of 2012.

Read that quote again. "There will be handsets with A15 by the end of next year" does not mean those handsets will have 20nm A15s.
 
There's plenty of incentive to stick with the platform: Live, PSN, exclusive titles, Kinect, Move, etc.

Not if one platform preserves BC and the other doesn't.

If xb720 doesn't have BC, and ps4 does, I'll be jumping platforms in a heartbeat (assuming they are similar in price/performance).

If not similar, and there is reason to stick with one's platform which doesn't have BC (xb360 in my case), I can assure you DLC will not be a priority for future purchases.

I don't think I'm alone in this opinion.

Although there may be as many (or more, who knows) who feel as you do, the end result will be fewer future DLC sales and fewer console sales on the platform without BC.

Nothing has to be vastly superior, slightly superior and slightly cheaper is plenty of reason to ditch it.

Sure, if you do not value BC at all.

Read that quote again. "There will be handsets with A15 by the end of next year" does not mean those handsets will have 20nm A15s.

:oops:

Completely missed that!
 
I think you are giving more credit to BC than it deserves. IMO that was more of a novel idea that Sony promoted with PS2 because it was a second gen disc-based console. That novelty also wore off pretty fast IMO by the time the current gen arrived as Alpha pointed out. And if BC for Xbox3 depends on the same factors as pointed out for PS4, then neither will have it and it won't really matter from a competitive standpoint.

I also think a con of recycling Cell is that, yes, development for it has matured, but to gain power they'd likely have to turn around and make it more complex than before. I don't see Sony wanting to start off with that kind of learning curve again. Even if it were shorter than with PS3, it could still take a long time. By the same token, would there really be much of a gain in power for something like a 6-core Xenon? I don't see it, but someone more knowledgeable on PPEs can confirm that for me. It seems they could just go with a slightly modified CPU from AMD that would have more power, be affordable, and at the same time be very familiar to devs, so there's a very small learning curve.

That's my view so I'd like to understand what "scaling them smartly" means to you?
 
The 360 managed with very limited BC (how many sales would having none at all have cost it?); no reason Sony couldn't manage something similar.
Did original XB actually have anything of value people might have wanted to play that didn't get ported over to 360?
With how prevalent piracy is on PC, I wouldn't be surprised to see Epic veering further away from PC development.
Valve doesn't seem to agree: http://www.eurogamer.net/articles/2011-11-28-valve-piracy-a-non-issue-for-steam
 
DLC is the path that all platform holders want to go down.

  • No middleman cost
  • Low distribution cost
  • No second hand market
  • Little piracy
  • Profit from reselling old titles (BC is important because of that)
I'd add "Significantly more convenient for the end-user" to the list as well, though I'm on 150Mbit net without any caps :)

In general DLC is awesome, but I wouldn't be surprised if, instead of directly running games from the old platform, you'd be forced to re-purchase them for the new one in some kind of "HD" form (just upscaled with a bunch of filters, or rendered at a higher resolution with AA). If they gave a decent discount to people who already own the original, I'd actually be quite happy with that.
 
There's a whole other thread on the worth of Cell, and another on the value of BC. No need to dilute this thread with those discussions.
 
Sounds like production to me.

Not sure if they'll make it by then as it is rather aggressive, but that's the plan.

As I said earlier, it seems all the iPad/iPhone hype may actually bring something good to gaming after all. TSMC seems very aggressive in pushing these nodes, which mobile devices need for ever-increasing processing ability in a portable form (i.e. they need aggressive node ramps to maintain the feverish pace of new models, not just mature processes).

There are big differences* between producing a mobile part (Low Power transistors) and well... just about everything else you would expect for a desktop or console (General Purpose transistors). Again I don't see the relevance of comparing mobile part production to that needed for a home console.

*Different power characteristics (e.g. static power), performance, production process (doping, transistor design and materials).

-------------


On another note, Page 9 has an interesting breakdown of tape out % by applications for 55nm LP and GP-type transistors.
 
The PS4 could simply have a 3-4 core modern PPC CPU with 6-8 SPUs on die. At 28nm or lower the SPUs would be tiny. So that would serve any BC requirements and allow a powerful enough modern CPU to handle next-gen workloads.
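A rough area-scaling sketch supports the "tiny SPUs" point. The ~15 mm² SPE area at 90nm is my approximate figure, not from the post, and ideal scaling is assumed:

```python
# Ideal die area scales with the square of the linear feature size.
# The ~15 mm^2 SPE area at 90nm is a rough figure assumed here; real process
# shrinks fall short of ideal scaling, so treat these as lower bounds.
spe_area_90nm = 15.0           # mm^2, approximate
for node in (45, 28):
    area = spe_area_90nm * (node / 90) ** 2
    print(f"{node}nm: ~{area:.1f} mm^2 per SPE, ~{8 * area:.0f} mm^2 for eight")
```

Even with imperfect scaling, eight SPEs at 28nm would be a small fraction of a typical console SoC die.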

I personally don't see either of the two platform holders going OOE for the CPU, as much as a few like to shout about it. All future consoles from here on out will dedicate more silicon to their GPUs anyway, so console CPUs will never be uber-powerful when ranged against the baddest and best PC Intel CPUs. At the same time the console CPUs will need to be able to feed data fast enough to the GPU, so you won't want to design a system sacrificing CPU performance for ease of programming with OOE when there's little benefit. Current console CPUs are IOE for a reason and I don't think anything will change next gen.
 
Does anybody even design in-order CPUs anymore? Low-performance crap for the mobile market notwithstanding.

Even ARM went OOO with Cortex A9/A15. It is simply the most power efficient way to increase performance.

Cheers
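The in-order vs out-of-order tradeoff being argued here can be illustrated with a toy latency model. The program and latencies below are invented, and the model ignores issue width, renaming, and everything else real hardware does; it only shows how OOE hides a long-latency stall behind independent work:

```python
# Toy comparison of single-issue in-order execution vs idealized out-of-order
# (pure dataflow) execution over a small dependency graph. Instructions are
# (name, latency_cycles, producer_indices); all values are made up.
program = [
    ("load", 10, []),    # long-latency cache miss
    ("add",   1, [0]),   # depends on the load
    ("mul",   3, []),    # independent work
    ("add",   1, [2]),   # depends on the mul
]

def in_order_cycles(prog):
    """Single-issue, stall-on-use: an op issues only after the previous op
    has issued AND all of its producers have finished."""
    issue, finish = [], []
    for i, (_, lat, deps) in enumerate(prog):
        ready = max((finish[d] for d in deps), default=0)
        start = max(ready, issue[i - 1] + 1 if i else 0)
        issue.append(start)
        finish.append(start + lat)
    return max(finish)

def out_of_order_cycles(prog):
    """Idealized dataflow: an op issues as soon as its producers finish."""
    finish = []
    for _, lat, deps in prog:
        finish.append(max((finish[d] for d in deps), default=0) + lat)
    return max(finish)

print(in_order_cycles(program), out_of_order_cycles(program))  # 15 11
```

In the in-order case the independent `mul` chain sits behind the stalled load; out-of-order execution overlaps it with the miss, which is exactly the latency-hiding argument for OOE.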
 
It's hard to imagine the CPUs will not do OOE, as long as IBM has an OOE solution to sell and clock rates can be maintained.
 
There are big differences* between producing a mobile part (Low Power transistors) and well... just about everything else you would expect for a desktop or console (General Purpose transistors). Again I don't see the relevance of comparing mobile part production to that needed for a home console.

*Different power characteristics (e.g. static power), performance, production process (doping, transistor design and materials).

-------------


On another note, Page 9 has an interesting breakdown of tape out % by applications for 55nm LP and GP-type transistors.

Thanks for the link!

Good breakdown of utilization.

14% of general-purpose 55nm production was for GPUs.

Side note on 20nm 2012 production: I misread the report; 20nm will not provide retail product in 2012.


As for what I meant by that statement about the iPhone/iPad bringing benefits: if demand for these low-power, high-performance parts is strong enough that TSMC sees profit in serving them and ramps new nodes aggressively, then surely those advancements will translate to other processes for higher power on the same node.
 