Predict: The Next Generation Console Tech

A 512-bit bus is what Intel uses on Haswell GT3 (128 MB of side memory). On PS4, especially with a much bigger GPU, you'd perhaps want at least a 1024-bit bus. As for the other memory, it could still be GDDR5 on a 128-bit bus.

If bandwidth is so critical then you don't necessarily need to choose between GDDR5 only and DDR3 + eDRAM/special RAM. Do both :). PS4 could have 2 GB of GDDR5 + 512 MB of side RAM, and texture from the GDDR pool with no problem. This gives you the highest performance, though capacity is a bit low.
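As a rough sanity check on these trade-offs: peak bus bandwidth is just the bus width in bytes times the transfer rate. A minimal sketch in Python (the 5.5 GT/s figure is an illustrative GDDR5 speed grade of that era, not a confirmed spec for any console):

```python
def bus_bandwidth_gbps(bus_bits, data_rate_gtps):
    """Peak bandwidth in GB/s: bus width in bytes times transfers per second."""
    return (bus_bits / 8) * data_rate_gtps

# Hypothetical 128-bit GDDR5 bus at 5.5 GT/s
print(bus_bandwidth_gbps(128, 5.5))   # 88.0 GB/s
```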
 
Oh! Can you imagine? A 1024-bit wide bus for the CPU and the GPU! The ultimate system!
 
I've no idea who Arthur Gies is, but there was never a plan to launch a next gen console this year.

I sometimes pay attention to what Arthur says. I don't think he purposely tries to spread false rumors; he may just be hearing wrong information. I remember him commenting on publishers being unhappy with MS not launching this year, but like you said, there has never been any indication that a 2012 launch was ever planned. I've even read the complete opposite: that MS was originally planning to launch in 2014 until Nintendo announced the Wii U. I'm not saying that's true either, but I believe that before I believe any rumor about a possible 2012 launch.

It says heavily engineered, so I'm expecting something more than just eDRAM slapped on it. Given that it's going to use the Sea Islands architecture as a baseline, I'm expecting they will tune it to be more future-proof. And I'm sure we will see some new features as well, beyond the DirectX 11.1 standard.

This is another comment from Arthur that I question. He compares the jump in design to what we saw with the Xenos back in 2005. Is there even an architecture shift on the horizon at AMD to back up this claim? Unified shaders and DX10 were right around the corner when the 360 was launched, but I'm not aware of any major architecture advancements scheduled to be released in the near future from AMD.

I just have my doubts we'll see anything new on the same level as Xenos when the 360 launched. I don't see what direction they can go, but I guess that's why I'm not an engineer designing GPUs. :p
 
A GPU optimized for voxel ray casting.
 
AMD had been playing with unified shaders for a couple of years before Xenos, my friend...

What we consider a new architecture is already old to AMD, Nvidia and Intel...
 
The Vita has a 1024-bit memory bus via stacked chips, so it wouldn't surprise me in the least.
 
Indeed, unified shaders were supposed to come right after R300.
 
Oh, you mean the guy that said Sony's chips came back first and were looking good while AMD had to throw out a ton of work on Durango and start from scratch? That's the info we're relying on to say Durango will get to market first?

I never believed Durango was definitely going to be first out, but I definitely believed everything Sweetvar26 said had a basis in fact, with the discrepancies coming from old docs or a misunderstanding of the info he got.

I remember him saying Orbis silicon was a few months early, but I am pretty sure he didn't talk about MS throwing out work and starting from scratch.

Any comments made about Orbis / Durango delays came from Charlie; the Orbis-only delay talk was from Kotaku.
 
Is the best next generation console going to be the one with a measurable advantage in shader power or memory amount over the other?
 
The problem I think is that the stackable memory standard may or may not be ready on time. I'm really hoping for it though.

Is there any information out there about how much memory could be stacked, if Sony were to go that route? I imagine it'd be up to a single 4 gigabit chip, giving 512MB of fast memory, yes?
 
DDR4 and GDDR5 4-gigabit chips will both be in the wild during 2013. But GDDR5 can't be stacked high (DDR4 can go up to 8 layers).
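The capacity arithmetic here is just gigabits per layer times layer count, divided by 8 to get gigabytes. A quick sketch, using the chip densities and layer counts from the posts above:

```python
def stack_capacity_gb(gigabits_per_layer, layers):
    """Capacity in gigabytes of a stacked DRAM device (8 bits per byte)."""
    return gigabits_per_layer * layers / 8

print(stack_capacity_gb(4, 1))  # 0.5 -> a single 4 Gbit chip is 512 MB
print(stack_capacity_gb(4, 8))  # 4.0 -> eight 4 Gbit DDR4 layers give 4 GB
```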
 
The Vita was using a slightly custom version, because work was ongoing before it was standardized.
The current low-power standard in production (Vita and cellphones) is 200 MHz or 266 MHz SDR, 512 bits wide, up to 4 Gbit per layer, with up to 4 layers in production and more possible. That's only 17 GB/s per chip, and putting many of them on an interposer probably wouldn't be cost-effective.
Planned quick upgrades for higher-power devices (GPU cards, consoles, etc.), using the current memory production process, should be 266 MHz DDR or up to 1066 MHz DDR. They discussed the possibility of making a 1024-bit width standard too.

With 16 Gbit chips, if they want 8 GB, they would need 4 devices side by side on an interposer.
Worst case would be 136 GB/s at 266 MHz DDR, 512 bits per chip, 4 chips wide.
Best case would be 1 TB/s at 1066 MHz DDR, 1024 bits per chip, 4 chips wide.
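Those worst/best case figures can be reproduced directly: transfers per second (doubled for DDR) times the per-chip bus width in bytes, times the chip count. A small sketch, assuming the clocks and widths quoted above:

```python
def wideio_bandwidth_gbps(clock_mhz, ddr, bus_bits, chips):
    """Aggregate bandwidth in GB/s for `chips` stacked-memory devices on an interposer."""
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)
    bytes_per_transfer = bus_bits // 8
    return transfers_per_sec * bytes_per_transfer * chips / 1e9

# Single current low-power chip: 266 MHz SDR, 512-bit
print(wideio_bandwidth_gbps(266, False, 512, 1))   # ~17 GB/s
# Worst case: 266 MHz DDR, 512 bits per chip, 4 chips
print(wideio_bandwidth_gbps(266, True, 512, 4))    # ~136 GB/s
# Best case: 1066 MHz DDR, 1024 bits per chip, 4 chips
print(wideio_bandwidth_gbps(1066, True, 1024, 4))  # ~1092 GB/s, i.e. ~1 TB/s
```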

If we follow the timeframe between the Vita release and WideIO standardization, we'd need a new standard in January for a 2013 console to have any chance of using it. So it's not looking good. I don't think they've even decided yet, let alone written the standard and had chips produced. If they don't use a standard, custom memory would cost a fortune. (This applies to the competing HMC standard, too.)
 
What if Hybrid Memory Cube (or something derived from it) is used in Durango?
Anything that applies to WideIO2 also applies to HMC; I think both are derived from WideIO, which came before them, and it all applies equally to either PS4 or 720.
I have no idea what HMC is planning for chip capacity, though; they seem to be targeting servers first, not consoles or portable devices.

I sure hope that won't turn into another Rambus versus DDR; it has the vibes of a company trying to compete against an established standard. Hopefully they won't sue each other :rolleyes:
 

From Micron:

I'll speak more about what's happening in game consoles as well. A pretty good push for more memory coming up in the Game Console segment as a level of redesigns. We'll start to hit it over the next couple of years.

And talking about consumer again here. I thought it'd be beneficial to show you across a couple of key applications how this looks in terms of megabyte per system. On the left, what we have are game consoles. This is a space that's been pretty flat for a number of years in terms of the average shipped density per system. That's going to be changing here pretty quickly. I think everyone realizes that these systems are somewhat clumpy in their development. The next generation of system is under development now and that because of 3D and some of the bandwidth requirements, drives the megabyte per console up fairly quickly. So we're anticipating some good growth here.


We've worked with a number of these vendors specifically on both custom and semi-custom solutions in that space.


 
That's very encouraging. I also found Micron saying they are actively participating in the JEDEC groups while also being in the HMC group, so it's just two approaches, and there probably won't be any of the destructive competition I feared. It might have support from most memory manufacturers: I know Samsung made the custom WideIO memory for the Vita, and maybe Micron is making the memory for MS, which would explain why MS joined the HMC group. The interview also addresses why it's such a big issue to do something like the Vita memory (stacked chip-on-chip) with high-power chips:

http://chipdesignmag.com/sld/blog/tag/hybrid-memory-cube/
SLD: Is Micron supporting Wide I/O?
Graham: Which version? There is a low-power Wide I/O and then there is a Wide I/O derivative that basically spawned from that group that is in the JEDEC task group right now. It is being explored and they are actually calling it high-bandwidth memory, but it is essentially a Wide I/O effort. Micron is actively participating in both of those.
SLD: How far out is the HMC from being able to be used in production designs.
Graham: Our plan of record is for production to begin in the second half of 2013.
SLD: How do you deal with the heat issue in the HMC?
Jeddeloh: In many of the early instances, we’re going to be part of the processor’s cooling complex. We put a top or lid on it. DRAM doesn’t like heat. It messes up the refresh. If we are not on top of the processor, the heat is manageable. Once you create that low power I/O, which is enabled by changing the locality of the overall system topology—and we’re not creating as much power within the cube itself—then we stack it up and pull the heat out the top.
 