Predict: The Next Generation Console Tech

Status
Not open for further replies.
I've noticed a long-standing trend of consoles having about the same amount of RAM as high-end video cards released around the same time period, a historical precedent which leaves even 4GB as extremely wishful thinking.
 

Well, there's also a trend of an average of 12x the RAM of the previous generation (at least over the last three generations), and 2GB would only be 4x. That would be quite the trend-breach, especially for a generation that has lasted longer than most. So I would expect 4GB minimum, but would definitely not consider 8GB out of the question if next gen happens late 2013 or 2014. In that respect, the 32MB-to-640MB jump for Vita is in a similar timeframe. It would be weird to me if a home console released 1-2 years after Vita contains only 3x the memory of that handheld device. That said, whatever it is, it is generally likely to be disappointing. ;)
 

I don't think it is a question of whether it can technically be done. I just don't think Sony can afford to do another beastly console (assuming a late 2013 launch), and Microsoft seems to be heading more in the Kinect/casual/living-room direction and doesn't require so much power.

This could all change if they decide to wait till 2014/2015 though. Hopefully, we get a clearer picture come E3....
 
The amount of RAM really wasn't the beastly part of current gen; it was the type of memory, system-wide bandwidth, processing chips, storage, etc.
 
That's only applicable to one console though. I don't think it's a good metric to use.

Obviously the memory available (which has been tied to process advancements) and its cost are major factors, but looking just at 3D consoles, the trend is on the high side.

3D Console Gen 1 (RAM totals in MB): 1994 (SS, PS), 1996 (N64)
Saturn: 5.5 (crazy)
PS1: 3 (2 + 1)
N64: 4

Mean: 4.16

3D Console Gen 2: 1998 (Dreamcast), 2000 (PS2), 2001 (GCN, Xbox)
Dreamcast: 26 (16 + 8 + 2)
PS2: 36 (32 + 4)
GCN: 43 (24 + 3 + 16)
Xbox: 64

Mean: 42.25
Increase: 10.16x

3D Console Gen 3: 2005 (360), 2006 (PS3)
Xbox 360: 522 (512 + 10)
PS3: 512 (256 + 256)

Mean: 517
Increase: 12.24x
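For what it's worth, the means and multipliers above can be rechecked mechanically. A quick Python sketch, using the table's own numbers:

```python
# Per-generation console RAM totals in MB, as listed in the table above.
gen1 = {"Saturn": 5.5, "PS1": 3, "N64": 4}
gen2 = {"Dreamcast": 26, "PS2": 36, "GCN": 43, "Xbox": 64}
gen3 = {"Xbox 360": 522, "PS3": 512}

def mean(d):
    """Arithmetic mean of the RAM totals in one generation."""
    return sum(d.values()) / len(d)

m1, m2, m3 = mean(gen1), mean(gen2), mean(gen3)
print(f"Gen 1 mean: {m1:.2f} MB")
print(f"Gen 2 mean: {m2:.2f} MB, increase {m2 / m1:.2f}x")
print(f"Gen 3 mean: {m3:.2f} MB, increase {m3 / m2:.2f}x")
```

Continuing the ~10-12x step from the ~517 MB Gen 3 mean would put a naive extrapolation for the next generation in the 5-6 GB range, which is where the 4GB-minimum argument comes from.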

People can flip this all they want. You can look by company, by year, by architecture, etc., and it all tends to come up on the 8x(+) side. If you are looking at what current technology offers in 2012, 8x is hard to swallow with some of the current design trends in view. But design pressure points change, and so has the economics of design. A lot of it may come down to developers telling Sony/MS, "Yeah, the PC market does ABC, but this is a console, and if you give us x amount we can do such and such." Or they may do something totally different (e.g. smaller RAM backed by a fast SSD).
 
The Alienware X51 comfortably fits 8GB of memory (plus another 1GB of video) into a console-class size/power envelope using inefficient, stock PC parts. Is 8GB of memory (which, not to mention, is very cheap) out of the question only because we are assuming last-gen design considerations? If an inefficient stock PC design can fit a DRAM module bus, why not a console?
 
Acert, you forgot to include the Wii in your numbers for Gen 3.
 
What if you had to choose between 1GB of very fast VRAM plus 6GB of DDR3 (CPU), or 2GB of fast UMA total?
hmm...So basically 4x2Gbit GDDR5 + 12x4Gbit DDR3 (8Gbit exists, though you'll have even slower speeds), which implies a 128-bit GDDR5 bus (64-bit in clamshell) + 192-bit DDR3.

Hypothetically speaking, that gives us around 80GB/s for GDDR5 @ 5Gb/s. If you want to start pushing the memory bus design i.e. fatter memory bus (due to accommodating the electrical issues at higher frequency), you could go for 6Gb/s or 96GB/s on the 128-bit bus. Of course, 192-bit 1600MHz DDR3 gives us 38.4GB/s (higher speeds might be available with 4Gbit by then, but this is ballpark).
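All of those figures fall out of one formula: peak bandwidth = bus width (bits) / 8 x per-pin data rate (Gb/s). A small sketch reproducing the numbers above:

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bytes per transfer x per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

configs = [
    (128, 5.0, "128-bit GDDR5 @ 5 Gb/s"),
    (128, 6.0, "128-bit GDDR5 @ 6 Gb/s"),
    (192, 1.6, "192-bit DDR3-1600 (1600 MT/s = 1.6 Gb/s per pin)"),
]
for bits, rate, label in configs:
    print(f"{label}: {bandwidth_gbs(bits, rate):.1f} GB/s")
```

That prints 80.0, 96.0, and 38.4 GB/s respectively, matching the post. The same function covers the 256-bit cases discussed further down.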

Alternatively, you could have 8x1Gb GDDR5 which will allow you to have a 256-bit bus, though now you're increasing the number of chips. You'll need a fair bit of area on the mobo if you want to keep the number of PCB layers down. I mean... If you do want to minimize motherboard size by packing the memory chips closely together, you'll need more layers, but then it's a waste considering this is the only region of the entire board making use of said layers. You're practically making a PC graphics card at that point, but with enormous waste considering the rest of the motherboard components.

16 chips surrounding an SoC is probably not that awful (not ideal, see below): 8 chips per motherboard side, 2 chips per side of the chip, at a distance similar to what's observed on 360. At that point, why bother splitting the RAM? Just go for a unified 4GB of GDDR5. The bus could be 256-bit with clamshell mode (512-bit would make the minimum die size lol-huge; at least with 256-bit you could still make a larger chip but allow for reductions over time more easily). You won't need edram to save the day either.

Mind you, in such a packed scenario, the dev kits would either have to be limited to the same memory configuration (just like 360 dev kits until years later when higher density chips became affordable), or they would have to have lol-spensive dev kits that don't resemble the retail unit at all. i.e. it's not lumped together in the same manufacturing line, has considerably more expensive design to accommodate double the RAM etc. If you think about it, the main difference between retail and dev kit units is the amount of RAM. For the current 360 dev kits, that's the difference between four and eight 1Gbit chips. The rest of the manufacturing is exactly the same. Well... there's the funky HDD, but that is a physical add-on.

Decisions, decisions...

----------

Short version: too many variables and trade-offs in design, cost, and even performance. This really isn't as simple as many forum folk would like to believe.

------------------

I think it'd be funny if they allocated a single 4Gbit DDR3 chip just for the OS. :p
 
If you have 8 DDR3 memory chips on a 64 bit memory bus, how is the data and addressing handled?

Is each memory chip capable of handling 32 data traces? And if so, do the same 32 traces get routed to 4 memory chips? And how is byte addressing handled: is it a straight sequence where the first 1/8 of the addressable range is on the first chip, the second 1/8 on the second, and so on? If so, this seems like it could be slow for pulling a 64-bit variable stored sequentially in memory, but if the memory controller could allocate byte addresses in blocks of 4 across different parts of the data bus you could speed this up. Maybe?

It's occurred to me that I know bugger all about this, and I can't find anything on a quick websearch that explicitly states what is happening (possibly because I don't know what I'm looking for). Anyone know?
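For what it's worth, with commodity x8 DDR3 parts the usual arrangement is a rank: eight x8 devices wired in parallel, each contributing 8 of the 64 data bits, so every 64-bit word is striped across all eight chips at once rather than stored whole on one of them. A toy sketch of that striping (simplified; real controllers also interleave banks, ranks, and channels):

```python
CHIPS = 8        # eight x8 DDR3 devices forming one 64-bit rank
CHIP_BITS = 8    # data bits contributed by each chip

def byte_location(addr):
    """Which chip holds a given byte, and at what internal word offset.
    Consecutive bytes of a 64-bit word land on different chips (byte lanes)."""
    word = addr // CHIPS   # 64-bit word index, shared by all chips in the rank
    chip = addr % CHIPS    # byte lane -> which chip supplies this byte
    return chip, word

# A 64-bit variable at byte address 0x100 is fetched in a single access:
chips_touched = {byte_location(0x100 + i)[0] for i in range(8)}
print(sorted(chips_touched))  # all eight chips: [0, 1, 2, 3, 4, 5, 6, 7]
```

So a sequential 64-bit read touches every chip simultaneously, which is exactly why the "first 1/8 of the address range on the first chip" layout would be slow and isn't used.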
 
The Alienware X51 comfortably fits 8GB of memory (plus another 1GB of video) into a console-class size/power envelope using inefficient, stock PC parts. Is 8GB of memory (which, not to mention, is very cheap) out of the question only because we are assuming last-gen design considerations? If an inefficient stock PC design can fit a DRAM module bus, why not a console?

Might want to consider the cost of manufacturing. Someone's gotta install those RAM sticks (be it desktop or SODIMM, though SODIMMs cost more too). It just adds time to manufacturing the console. For a single person, no problem, but then you're talking about how many thousands to manufacture and spit out at the end of the month? Things could add up since it's less automated. I mean, there's probably a good reason why even Xbox 1 didn't have DIMMs. ;)

Latency - Point-to-point is going to be a pretty big advantage, especially if you're considering a UMA design.

Just thought I'd throw that out there.
 
If you ditch edram, there's space for it. :p The edram I/O is a pretty big chunk of perimeter itself.

I mean... if you look at the 360, it could have been 256-bit, but the bandwidth wouldn't have justified the cost (44GB/s). Nowadays, you could have double frequency and double rate (GDDR5) to bring bandwidth within ballpark range of that edram bandwidth for that bus size.
 
Will there still be room after two dieshrinks?

If you're doing a separate CPU and GPU, then eventually the merging of the two brings the digital circuit area up again. You see this in the progression of 360 pretty clearly.

If you have an SoC, you'll want to have limits in place to facilitate future shrinks though. But... how small is your combined CPU/GPU anyway? :p The question is... how low will you go?
 

An SoC, while possible, would indicate drastic reductions in area from last gen. If someone goes with an SoC, their options for memory (size, bandwidth) become much more limited and less interesting, especially once you consider that the memory interface has to be small enough to accommodate future die reductions, or that the memory may eventually have to move onto the SoC as well.
 
If your performance target is somewhere in SoC land you may not need more than a 128 bit bus to keep everything fed. Might not need any edram either.

Hopefully we'll get enough memory to store a holster animation next generation.
 
If you have 8 DDR3 memory chips on a 64 bit memory bus, how is the data and addressing handled?

Is each memory chip capable of handling 32 data traces? And if so, do the same 32 traces get routed to 4 memory chips? And how is byte addressing handled: is it a straight sequence where the first 1/8 of the addressable range is on the first chip, the second 1/8 on the second, and so on? If so, this seems like it could be slow for pulling a 64-bit variable stored sequentially in memory, but if the memory controller could allocate byte addresses in blocks of 4 across different parts of the data bus you could speed this up. Maybe?

It's occurred to me that I know bugger all about this, and I can't find anything on a quick websearch that explicitly states what is happening (possibly because I don't know what I'm looking for). Anyone know?

Bog-standard DDR3 is at most 16 bits wide, but last fall Samsung and Elpida announced LP versions of DDR3 that are actually 32 bits wide and are currently sampling.
http://techon.nikkeibp.co.jp/english/NEWS_EN/20120222/205615/

If you arranged 8 units (4 GB) on a 256-bit bus you would get about 50 GB/s of bandwidth. Not that impressive, but if you have a separate fast memory (EDRAM?) for your frame buffer, maybe it is not that bad.
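That arrangement checks out numerically; a quick sketch, assuming 4 Gbit x32 LPDDR3 devices at 1600 MT/s (as in the announcement above):

```python
chips = 8
chip_gbit = 4      # 4 Gbit density per device
chip_width = 32    # x32 LPDDR3 interface per device
rate_gbps = 1.6    # 1600 MT/s = 1.6 Gb/s per data pin

capacity_gb = chips * chip_gbit / 8      # total capacity in GB
bus_bits = chips * chip_width            # aggregate bus width
bandwidth = bus_bits / 8 * rate_gbps     # peak bandwidth in GB/s

print(f"{capacity_gb:.0f} GB on a {bus_bits}-bit bus: {bandwidth:.1f} GB/s")
```

That gives 4 GB on a 256-bit bus at 51.2 GB/s peak, i.e. the "about 50 GB/s" in the post.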

I've been considering this alternative for a while, since less than 4 GB of memory for the next-gen consoles does not feel like an attractive option. ;)
 
Since SoCs usually have a lower cost than multi-chip systems, what is the probability of us seeing them in consoles? Are there any size limits to an SoC?
 