Predict: The Next Generation Console Tech

That is the beauty of 2.5D. You don't stack power-hungry chips on top of each other, but next to each other on a common silicon interposer, which still gives most of the advantages of 3D stacking with regard to fast and power-efficient communication.

Sony seems pretty keen on this technology.

Does this have the same problem as the Pentium Pro?
A Pentium Pro had a CPU die next to one or two full-speed L2 dies, intricately connected.
Intel would manufacture and assemble both dies and only afterwards find out whether the package worked at all, effectively multiplying the yield ratios, then adding the difficulty factor of assembling the dies.

The Pentium Pro with 1MB L2 cache used two L2 dies and had a stratospheric price :LOL: (I remember $17K, but I'm not sure about it; maybe it was 17k French francs, so 5x less). And from reading stuff around a decade later, I found out it didn't even bring any real performance benefit over lesser models, except for not totally crumbling in a quad-CPU setup.
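A rough sketch of that yield-multiplication point, in case it helps: if each die yields independently and the assembled package can only be tested afterwards, the good-package rate is the product of the individual yields. The numbers below are invented purely for illustration, not actual Pentium Pro figures:

# Hypothetical, illustrative yields -- not real Pentium Pro data.
cpu_yield = 0.80        # chance a CPU die is good
l2_yield = 0.85         # chance one full-speed L2 die is good
assembly_yield = 0.95   # chance the multi-die assembly itself succeeds

one_l2 = cpu_yield * l2_yield * assembly_yield       # package with one L2 die
two_l2 = cpu_yield * l2_yield ** 2 * assembly_yield  # 1MB part with two L2 dies

print(f"good packages, 1x L2: {one_l2:.1%}")  # ~64.6%
print(f"good packages, 2x L2: {two_l2:.1%}")  # ~54.9%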
 
But the only thing I seem to have heard from nVidia was the assumption that they would make a GPU for a console only because no one company could handle all three. I had heard people talk about nVidia saying they were handling a console in the past, but that was my first time seeing the actual context. The few rumors I have seen about the PS4 have all pointed to an AMD GPU.

Well, I am certain MS won't go Nvidia (at least, 95% certain) as MS is big on backward compatibility. So that would make all 3.

I would still be a little surprised if Nvidia doesn't end up in the next Playstation.
 
But the only thing I seem to have heard from nVidia was the assumption that they would make a GPU for a console only because no one company could handle all three.

Can you post a link to that interview? I'd like to confirm it.

I don't think it would be a problem to handle all three technically; it's not like AMD has to go out and fab them. Engineering bandwidth might be another matter, but if they can already do two, three is only another incremental step, not a sea change.
 
Not sure how applicable it is, but a true 3D chip, with processors and RAM, taped out in October from Sony-Ericsson:
http://eda360insider.wordpress.com/...the-advanced-notes-can-you-say-tour-de-force/
"The finalized Wide I/O spec operates at 266 Mtransfers/sec using an SDR (Single Data Rate) protocol, which delivers 17 Gbytes/sec of data throughput via four parallel, independent, 128-bit memory channels (4.26 Gbytes/sec/channel)."

This is at a minuscule cell-phone power level. So even if they end up doing "only" 2.5D, could we get something like a terabyte or two per second?
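For what it's worth, the quoted figures are internally consistent; here's the quick arithmetic, using only the numbers from the spec quote above:

# First-gen Wide I/O bandwidth, from the figures quoted above.
transfer_rate = 266e6       # transfers/sec (SDR)
channel_width_bits = 128    # bits per channel
channels = 4                # independent channels

per_channel = transfer_rate * channel_width_bits / 8 / 1e9  # GB/s
total = per_channel * channels

print(f"{per_channel:.2f} GB/s per channel")  # ~4.26 GB/s
print(f"{total:.1f} GB/s total")              # ~17 GB/s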

Yes, I've seen that. That is ST-Ericsson, a joint venture between STMicroelectronics and Ericsson; they have nothing to do with Sony. In fact, Nokia is using their CPUs in their Windows phones.

I think all CPUs using Wide I/O DRAM are candidates for this technology; some believed that the CPU for the iPad 3 would use it as well.

http://www.eetimes.com/electronics-news/4234961/TSMC-manufacturing-process-in-trouble?pageNumber=1

In August 2011, TSMC was reported to have begun trial manufacturing of an A6 processor design for Apple but to be redesigning the chip for volume production in the first quarter of 2012. One potential reason for the respin is that TSMC plans to use 3-D stacking technologies along with its 28-nm manufacturing process in the production of the A6 for Apple. The use of a specialized silicon interposer and bump-on-trace interconnect may impose specific requirements on the main processor die.


@Blazkowicz: The technology still comes at a premium price, but obviously the yields are good enough to be competitive in high-end smartphones. That is also where the benefits are most pronounced: high performance and low power requirements. A lot of companies are looking at how to bring the technology to high-performance computing; maybe we will see some announcements during this year.
 
Can you post a link to that interview? I'd like to confirm it.

I don't think it would be a problem to handle all three technically; it's not like AMD has to go out and fab them. Engineering bandwidth might be another matter, but if they can already do two, three is only another incremental step, not a sea change.

Oh it was the one Crossbar posted on the last page.

Who are the sources behind the idea that AMD got the GPU contracts for all the next-gen consoles?

Sorry I missed this. The one I remember was a HardOCP article back in May that said AMD had all three console GPUs.
 
HardOCP also confirmed the Next Box was using a modified Cell CPU. Disregard them.

I can't completely disregard them, as they are proving to be correct on the GPUs. The problem with that is what they meant by "Cell CPU". I think most of us would take that to mean something different from what they probably meant by it.
 
Well they then directly contradicted their own Cell rumors shortly after...so there's no defending them there.

We reported earlier this month in our "E3 Rumors on Next Generation Console Hardware" article that Microsoft's next-gen Xbox would likely be sporting a new IBM cell processor, although we did suggest that was not written in stone. We are hearing this week that AMD has very likely locked up the whole shebang with a Fusion Bulldozer variant APU.

Yeah, right.

Anyway, I don't think the AMD-for-all-consoles thing originated with them, and just by naming either AMD or Nvidia for any console you'd have a decent chance of accidentally being retroactively right, since there are only two choices.
 
Well they then directly contradicted their own Cell rumors shortly after...so there's no defending them there.



Yeah, right.

Anyway, I don't think the AMD-for-all-consoles thing originated with them, and just by naming either AMD or Nvidia for any console you'd have a decent chance of accidentally being retroactively right, since there are only two choices.

I always wondered where that came from. But yeah, I never said they were the first to say it; they were just the one I remembered saying it. And I'm not defending them, but it could be said they are getting their info crossed, as SemiAccurate did say that MS was using an SoC and that the GPU would come from AMD. Also, while I see why you are saying that, saying all three would go with AMD isn't something that could be treated as "expected".
 
@Blazkowicz: The technology still comes at a premium price, but obviously the yields are good enough to be competitive in high-end smartphones. That is also where the benefits are most pronounced: high performance and low power requirements. A lot of companies are looking at how to bring the technology to high-performance computing; maybe we will see some announcements during this year.

Quite likely it seems.

It’s clear that the initial Wide I/O specification with its peak bandwidth of 4.26 Gbytes/sec/channel will not satisfy all bandwidth requirements for all future systems. One would not expect that. So 3D memory standards are in the cards and according to Liam Madden, the corporate VP at Xilinx in charge of that company’s 2.5D FPGA efforts, next-generation Wide I/O JEDEC standards are in the works that target Tbit/sec data rates for high-performance requirements. These new standards will also explicitly support 2.5D and 3D memory stacks.
 
I think the main thing here is that the PS4 is a late-2013 console at the earliest, which is what the evidence seems to point to. No one may have the contract yet. There can still be multiple competing designs at this point with no one having won the contract, so it could be one, the other, or neither at this point.
 
Not sure how applicable it is, but a true 3D chip, with processors and RAM, taped out in October from Sony-Ericsson:
http://eda360insider.wordpress.com/...the-advanced-notes-can-you-say-tour-de-force/
"The finalized Wide I/O spec operates at 266 Mtransfers/sec using an SDR (Single Data Rate) protocol, which delivers 17 Gbytes/sec of data throughput via four parallel, independent, 128-bit memory channels (4.26 Gbytes/sec/channel)."

This is at a minuscule cell-phone power level. So even if they end up doing "only" 2.5D, could we get something like a terabyte or two per second?

Well, four 4-chip stacks at 4 GHz would have a bandwidth a little over a TByte/sec, and that's nothing more exotic than a multiplier effect, so yeah. That's 8 GB at a 2 GB density per stack. Remember also that you can attach the stacks to the bottom/other side of the interposer, so it'd fit comfortably on a 400 mm^2 interposer with a CPU and GPU on the opposite side (possibly even an RF chip too). JEDEC is supposed to be finished with the HP version of Wide I/O in the next 6 months, and there's been talk of a terabyte standard among other things, so we'll see.

Another advantage is being able to do split chips. A split-chip GPU would allow better yields, with advantages in wafer layout and sorting along with the normal yield enhancers like binning for clocks and disabling units. You can also do very simple power gating by powering down one chip when it's not needed. On the CPU side, you can split out your L3/eDRAM from the rest and benefit from cheaper stock (bulk CMOS vs. SOI) and easier processes (4 metal layers vs. 10-12), and again improve yields.

Power-wise, one could expect such a stacked chip to offer power savings similar to a process shrink, or around 40%. So for a 28 nm system conventionally arranged and having a TDP of 200 W, you could realize the same in a 2.5D stack at around 120 W. That's a Nextbox slim out of the gate, and you still have a 20 nm process shrink in your hip pocket to use.
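Here's the back-of-the-envelope math behind those numbers, assuming each stack keeps the Wide I/O layout of four 128-bit channels and that "4 GHz" means 4 Gtransfers/sec per pin (those assumptions are mine, and the figures are illustrative):

# Hypothetical next-gen Wide I/O-style stacks; figures are assumptions, not a spec.
stacks = 4
channels_per_stack = 4
channel_width_bits = 128
transfer_rate = 4e9   # transfers/sec per pin (the "4 GHz" above)

bw_gbs = stacks * channels_per_stack * channel_width_bits * transfer_rate / 8 / 1e9
print(f"aggregate bandwidth: {bw_gbs:.0f} GB/s")  # 1024 GB/s, a little over 1 TB/s

capacity_gb = stacks * 2   # assuming 2 GB per stack
print(f"capacity: {capacity_gb} GB")              # 8 GB

# Power claim: ~40% savings vs. a conventional 28 nm layout.
conventional_tdp_w = 200
stacked_tdp_w = conventional_tdp_w * (1 - 0.40)
print(f"2.5D TDP estimate: {stacked_tdp_w:.0f} W")  # 120 W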
 
It's funny to see how both companies, AMD and Nvidia, are contradicting each other in their statements. Technically speaking, one should put more credibility in Jen-Hsun Huang's words, since he is still NV's CEO, than in those of an ex-AMD employee. I would be very surprised if Sony switched companies this soon. But in the end I think it's better for Sony to side with NV, mainly for backwards-compatibility reasons.

I don't see BC as much of an issue, since anything NV put out now would be decidedly different from RSX. I think what could fairly be said, though, is that any NV offering would have to be a fully custom one. There's no way they'd get away with doing an iteration of their current GPUs like they did with RSX.
 
I don't see BC as much of an issue, since anything NV put out now would be decidedly different from RSX. I think what could fairly be said, though, is that any NV offering would have to be a fully custom one. There's no way they'd get away with doing an iteration of their current GPUs like they did with RSX.

Why not?

If they don't go with eDRAM, or even if they do I guess, I've never seen much point in a fully custom job like Xenos. I bet that was a one-off.

NOT being custom should allow you a much shorter potential lead time. And these days, I doubt there's some kind of advanced tech not already in the desktop line, like, say, unified shaders in Xenos' day.

Also, while Xenos was arguably as powerful as or more powerful than anything on the desktop at the time, this time around that almost certainly won't be the case for any console, further lessening the point of going custom, beyond of course things like the memory bus and eDRAM.
 
Why not?

If they don't go with eDRAM, or even if they do I guess, I've never seen much point in a fully custom job like Xenos. I bet that was a one-off.

NOT being custom should allow you a much shorter potential lead time. And these days, I doubt there's some kind of advanced tech not already in the desktop line, like, say, unified shaders in Xenos' day.

Also, while Xenos was arguably as powerful as or more powerful than anything on the desktop at the time, this time around that almost certainly won't be the case for any console, further lessening the point of going custom, beyond of course things like the memory bus and eDRAM.

And even Xenos wasn't "completely custom"; it was, after all, derived from the R400 project, which couldn't be done at the time it was planned for.
 


Too big, too power-hungry, and too hot for an iteration in the performance envelope they could get elsewhere.


Additionally, IMO it would need to be NVidia to initiate and propose a new GPU. If they think they can wait until Sony comes hat in hand to them, they're going to be seriously disappointed.
 
I'm really curious about what these companies' internal development schedules really are like. There were pretty widespread rumors a couple of years ago about Sony evaluating Larrabee for the PS4. I don't know how much credit to give to that, but it at least suggests that they have to be planning for a console release quite a few years in advance.

I can't wait for all of the players to actually reveal what they've been up to all this time.
 
I'm really curious about what these companies' internal development schedules really are like. There were pretty widespread rumors a couple of years ago about Sony evaluating Larrabee for the PS4. I don't know how much credit to give to that, but it at least suggests that they have to be planning for a console release quite a few years in advance.

I can't wait for all of the players to actually reveal what they've been up to all this time.

Sony checked out Larrabee, and what I hear from a guy who used to work on Larrabee is that Sony wasn't much impressed, mainly due to the power and heat requirements.
 