Predict: The Next Generation Console Tech

Status
Not open for further replies.
PC gaming suffers from limited specs too. We are well past the point where people needed to upgrade for non-gaming needs (outside relative niches such as content creation), and people are buying laptops instead.

The future of PC gaming looks unpredictable: by 2016 a huge number of people will have dual-core machines with 2-4GB of RAM and a weak 512MB/1GB GPU (current machines plus the cheap builds of that day), while other people will have an 8-core, 8GB PC with a 4-teraflop, 4GB GPU, or who knows what new heterogeneous architecture.
 
Sony is allegedly (possibly) moving away from Cell towards a PC-style multicore for PS4, Larrabee has been nixed, etc.

Hmm, I wonder what happened to that Intel manycore roadmap that was posted so often in defense of Cell? Now, at least, it appears Sony may be moving in the opposite direction. Although an SPU2 setup is also discussed.
 
So let's discuss the pros and cons of split memory pools on a console, e.g.:

1GB of fast GPU memory
3GB (or 6GB) of slower memory for the CPU

Maybe not ideal, but given budgets, going the "PC" route may allow a lot of cheap memory, provide very fast GPU memory bandwidth, and with a huge CPU pool a lot of graphics assets can sit in the general memory pool for quick transfer to the GPU.
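A quick sanity check on that last point: whether staging assets in the big CPU pool works depends on how fast they can be shuffled across to the GPU pool. A minimal sketch of the arithmetic, where the 256MB asset size and the 20GB/s CPU-to-GPU link are purely illustrative assumptions, not any real console's specs:

```python
def transfer_time_ms(asset_mb: float, link_gb_per_s: float) -> float:
    """Time in milliseconds to move asset_mb megabytes over a link_gb_per_s GB/s link."""
    return asset_mb / (link_gb_per_s * 1024) * 1000

# Streaming a 256MB texture set over a hypothetical 20GB/s CPU->GPU link:
print(f"{transfer_time_ms(256, 20):.1f} ms")  # 12.5 ms, well inside a 33.3ms frame at 30fps
```

So even a few hundred megabytes per frame-budget is plausible on paper; latency and contention with the CPU's own traffic would be the real questions.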

I see cheap PCs shipping all the time, where both the vendor and the retailer make a profit, that have 4GB or 6GB of memory.

(As an aside, I have seen a large number of netbooks with 2GB of shared memory for under $300, so I am not sure the above discussion about netbooks having paltry video memory applies across the board--not that netbooks are a good parallel. My Acer 1410 has a nice display, Core2 Solo, 2GB of decent memory, 250GB HDD, Windows 7, and WiFi, all in a very small form factor for under $400. Netbooks are different animals from a console due to size and business model.)
 
I always thought the main advantage provided by a unified memory pool is ease of development. No need to chop things up to fit into split memory pools when you're hitting limits.
 
> I always thought the main advantage provided by a unified memory pool is ease of development. No need to chop things up to fit into split memory pools when you're hitting limits.

What is easier: 2GB total, or 7GB via a 6+1 arrangement? All things being equal, yeah, 512MB was easier than 2x256MB, but we won't be seeing 6GB of GPU memory in a console in 2012. I don't think settling for 1GB, or stretching to 2GB, for the sake of a unified pool necessarily justifies the expense and tradeoffs.
 
How clear is the roadmap for DDR and GDDR until 2012?

I know that DDR4 is in the works and will be available in 2012, but my guess is that it may be too expensive for consoles at launch. For graphics memory, I think there is a GDDR5+ coming before GDDR6, which may bring lower power consumption at slightly higher frequencies. It should be available in 2011 and become ubiquitous quickly, since the transition from GDDR5 will be easier.

If we assume that consoles will adopt split memory pools this time, which seems reasonable to me, my guess would be DDR3 for system memory and GDDR5+ for GPU access. Let's say 8 chips for each pool, which might give 2-4GB for main memory and 2GB for video memory, assuming 2Gbit chips will be reasonably cheap in 2012.
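The chip-count arithmetic above can be sketched quickly. The densities, bus width, and effective data rate below are assumptions about what might be plausible in 2012, not specs of any announced part:

```python
def pool_capacity_gb(chips: int, gbit_per_chip: int) -> float:
    # total capacity: chip count * per-chip density in Gbit, 8 bits per byte
    return chips * gbit_per_chip / 8

def pool_bandwidth_gb_s(bus_bits: int, data_rate_gt_s: float) -> float:
    # bandwidth: bus width in bytes * effective data rate in GT/s
    return bus_bits / 8 * data_rate_gt_s

# 8 x 2Gbit chips per pool -> 2GB each for the CPU and GPU pools
print(pool_capacity_gb(8, 2))            # 2.0 GB
# 8 GDDR5 chips at 32 bits each (256-bit bus), 4 GT/s effective
print(pool_bandwidth_gb_s(8 * 32, 4.0))  # 128.0 GB/s
```

Moving to 4Gbit chips, if they get cheap in time, doubles capacity without touching the bus width.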
 
> So let's discuss the pros and cons of split memory pools on a console, e.g.:
>
> 1GB of fast GPU memory
> 3GB (or 6GB) of slower memory for the CPU
>
> Maybe not ideal, but given budgets, going the "PC" route may allow a lot of cheap memory, provide very fast GPU memory bandwidth, and with a huge CPU pool a lot of graphics assets can sit in the general memory pool for quick transfer to the GPU.

The cheapest desktop with 2GB of RAM on Newegg is $310, and the cheapest with 4GB is $390. So even if you take $100 off for Windows and various profit margins, that doesn't seem to leave much money to add another GPU chip plus its associated fast RAM. Furthermore, I don't believe the price-per-GB or GB/s difference between GDDR and DDR is significant enough to bother with a separate, larger pool of the latter alongside the former.

In addition, you're looking at extra costs for eventually combining the two chips, since you'd have two distinct memory architectures, plus the on-chip cost of another memory interface and extra board complexity.

Where I do see an extra pool of RAM coming in, I believe it would make sense to use flash rather than DRAM. It saves cost on a cheaper non-HDD SKU, or lets a console maker skip the HDD entirely: the flash can play two roles, acting as a slower pool of low-latency storage to make up for the disadvantages of optical distribution, while saving money by taking a fixed-cost mechanical component out of the picture.
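To make the "flash as a slower pool" idea concrete, here's a minimal read-through cache sketch. The class, sizes, and file names are all hypothetical; a real console would do this in firmware with eviction policies, not a Python dict:

```python
class FlashCache:
    """Hypothetical flash tier sitting between the optical disc and RAM."""

    def __init__(self, capacity_mb: int):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.store = {}  # asset name -> (size_mb, data)

    def read(self, name, size_mb, read_from_disc):
        if name in self.store:
            return self.store[name][1]      # flash hit: low-latency path
        data = read_from_disc(name)         # miss: slow optical seek + read
        if self.used_mb + size_mb <= self.capacity_mb:
            self.store[name] = (size_mb, data)  # cache for next time
            self.used_mb += size_mb
        return data

cache = FlashCache(capacity_mb=16 * 1024)   # a hypothetical 16GB flash SKU
data = cache.read("level1.pak", 512, lambda name: f"<disc data for {name}>")
```

After the first read, subsequent loads of the same asset never touch the disc, which is exactly the install-cache role the 360's optional HDD plays today.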
 
What about a dual Llano solution?

Llano has a 2 x 64-bit memory controller. Each chip would have 112GB/s of bandwidth to 4GB of GDDR5 UMA at 1750MHz.
I believe we will also have 64 to 128MB of eDRAM on a daughter die, connected to each chip with a 128GB/s link. On the daughter die there will also be 16 to 32 ROPs, for a total internal bandwidth of 2-4TB/s.
If the entire chip is running at 2GHz, you would have 480 SPs * 2 flops * 2000MHz * 2 chips = almost 4 teraflops, plus 8 x86-64 cores.

At 32nm SOI, Llano should be around 200mm^2, and being a mobile solution it shouldn't be too power-hungry or generate too much heat. Mass production is supposed to start at the beginning of 2011, leaving MS lots of room to put a few million of those chips aside.
Add 16GB to 64GB of fast flash, and you have an entry-level SKU at about $299 (with a $100 loss on each console). The engineering cost would be small compared to developing a solution from the ground up, and with a smaller process node they could later put both chips on the same die.
A console like that could easily handle 3D (each GPU rendering one viewpoint) and 1080p@30fps with 8xAA.
 
The bloat of x86 is not a good idea in a console: all those transistors used up decoding/emulating thousands of instructions needed by some business software written in 1991. Consoles have no need to be x86 compatible.
 
Not really.
If AMD/Intel can design a consumer level CPU at a given transistor/power and thermal count which also is optimised for extremely fast integer and FPU workloads... and at the same given transistor/power and thermal count for a non-x86 does not approach the level of these processors then..... well it is a no brainer to go with x86 factoring in the final part of the equation which is cost.

In other words, what other CPU in the same target market as x86 exists now to challenge AMD's and Intel's CPUs? The simple answer is none.

Possible approaches are

1. IBM modifying its POWER architecture for consoles (it did so this generation, and, I may be wrong, but those CPUs are not that powerful per clock compared to previous-gen CPUs from Intel and AMD such as the Pentium III and the original Athlon).

2. ARM being asked to design a CPU that is power efficient given the thermal and transistor envelope of a mainstream desktop CPU manufactured at 32nm or equivalent.

3. Someone else decides to have a go... Fujitsu, Hitachi, Toshiba; let's go crazy, why not Elbrus, or Oracle, or another x86 stalwart, VIA.

4. NVIDIA designs its own CPU with SLI technology built-in.

;)
 
Interesting possibilities, Tahir. I wonder about IHV R&D resources though. Surely if one of those companies (options 2-4) designed a console CPU, it couldn't be a "one off" custom design; rather, it would have to be something they could leverage in their other markets. It is debatable whether even AMD has the resources to spare to design a custom console CPU.
 
From AMD, and even Intel, one would expect an off-the-shelf part. At the moment you have many different CPUs with different core counts, cache (L2 and L3), and thermal characteristics.

IBM has modified its own POWER architecture to create the PPE and Xenon, as we all know. I don't want to attract the hardcore SEGA fans, but Hitachi and SEGA had a good relationship back in the post-Genesis/Megadrive days, culminating in the SH-4, and who can forget the golden goodness of the dual SH-2s in the SEGA Saturn.

Moving forward to 2011/12, it would be interesting to see AMD's Fusion concept in a console alongside a discrete graphics card. The Fusion graphics core could be used for GPGPU work rather than rendering, e.g. physics calculations.

Any thoughts?
 
Yeah, of all the (non-IBM) possibilities, Fusion seems the most likely. I feel that moving more general compute work to the GPU is where the future is headed. Some of those GPGPU benchmarks are impressive, especially compared to CPUs. It seems like you can get a lot of bang for your buck there.
 
From a previous post, here's a single Nehalem core:
[image: nehalemcore.jpg]
The x86 ISA has bloated over the years, with both Intel and AMD adding new instructions, and now it's a mess that takes up 10-15% of die area per core for "decode and microcode". Also, going with an x86 Intel CPU didn't work out so well for the original Xbox in terms of costs. I think both MS and Sony will have to use a PPC architecture for backwards compatibility anyway. General-purpose integer performance doesn't seem too crucial for gaming, since both consoles seem to be doing fine with their weak CPUs. GPU + floating point seems most important, and it looks like a single powerful GPGPU will be able to handle both.
 
I have a quick question for the Xbox next speculation part of this thread. If Microsoft decided to support a completely new architecture within the Windows environment would a semiconductor company then offer them that same architecture to use for free or very cheaply within the Xbox 360? It seems the only thing keeping X86 exclusivity within the Windows environment is Microsoft themselves.
 
> I have a quick question for the Xbox next speculation part of this thread. If Microsoft decided to support a completely new architecture within the Windows environment would a semiconductor company then offer them that same architecture to use for free or very cheaply within the Xbox 360? It seems the only thing keeping X86 exclusivity within the Windows environment is Microsoft themselves.

The Xbox 360 is already running a Windows 2000-derived kernel, right? Besides, NT used to run on many architectures back in the day. It just didn't make sense financially.

I doubt they would move from PPC to something else; they have a lot of work to do in the reliability area, though.
 