Predict: The Next Generation Console Tech

Status
Not open for further replies.
My console wouldn't have dedicated fixed-function hardware for physics, AI, cryptography or specific graphics effects. A more programmable and considerably faster GPU would be the first requirement (it could also be used for parallel general-purpose processing), and the second would be a more up-to-date CPU (more threads/cores, with OoO execution or wide SMT at least). Basically more parallel processing power and more serial processing power. If the console needed to always send data encrypted over the net and extensively use an encrypted file system for all file reads/writes, dedicated encryption hardware wouldn't be a bad choice, as long as it didn't add latency (moving data from one processing unit to another takes time). After seeing the massive speedup the recent AES instructions brought to encryption on standard x86 CPUs, a few new instructions would likely be enough (and would offer more flexibility in the future).

Imagine if the Xbox 360 and PS3 designers had preferred fixed-function dedicated hardware for every feature back in 2005. Then look at the AAA titles at launch and compare them to recent AAA titles. The difference is huge. Many new rendering techniques have been invented (deferred lighting being one of the most important). With fixed-function hardware, nothing like that would have been possible. With the next-generation consoles, I want to be able to experiment with voxel rendering, for example. I want to be able to use my GPU efficiently for any kind of parallel calculation. It might be that five years from now there are plenty of games out that do not rasterize triangles at all. Same for physics simulation. Better ways to simulate physics and calculate collisions are invented all the time. General-purpose hardware lets us experiment with new algorithms and create innovative new games.
 
Yeah, cost is going to be high for the higher-density modules at least; even DIMMs using 1Gbit DDR3 chips are not that cheap on the market AFAIK (those 8-16GB kits). And that's just really slow DDR3 compared to the much higher-clocked GDDR5 that everyone should want or expect on a modest memory bus.

Certainly, packing as many chips in the same space as SODIMMs is going to have heat concerns at the higher clocks. Plus, DIMMs have latency concerns compared to directly soldering the things right next to the console processors. DDR3 DIMMs just aren't directly comparable to the GDDR5 configurations you see on high end graphics cards at all.

So again, sure, pack all that DDR3 onto DIMMs for consoles, but just don't expect any decent performance. Just look at how bandwidth-constrained Llano is. You'd need at least triple 64-bit channels at non-JEDEC-spec speeds to even compete.
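To put rough numbers on that (the clocks and bus widths below are my own illustrative assumptions, not figures from this thread): peak theoretical bandwidth is just channels × bus width × data rate, so a quick sketch in Python shows why triple-channel DDR3 still falls short of even a modest GDDR5 setup.

```python
def bandwidth_gbs(bus_width_bits, data_rate_gtps, channels=1):
    """Peak theoretical bandwidth in GB/s: bytes per transfer x transfer rate x channels."""
    return channels * (bus_width_bits / 8) * data_rate_gtps

ddr3_triple = bandwidth_gbs(64, 1.6, channels=3)  # three DDR3-1600 channels: 38.4 GB/s
gddr5_128   = bandwidth_gbs(128, 4.0)             # 4 Gbps GDDR5 on a 128-bit bus: 64.0 GB/s
xenos_gddr3 = bandwidth_gbs(128, 1.4)             # 360's 700MHz (1.4 GT/s) GDDR3: 22.4 GB/s
print(ddr3_triple, gddr5_128, xenos_gddr3)
```

Even with three channels at DDR3-1600, you get barely 60% of what a plain 128-bit GDDR5 bus delivers.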

And just a reminder: the GDDR3 700MHz chips in the Xbox 360 were pretty costly due to supply constraints and being at the top end of clock rates. The price eventually came down, but it's not something to sneeze at so easily. Don't forget how long it took them to switch to 1Gbit chips either - keep in mind higher-density chips start out with slower clocks, and really, it only made sense to switch once it was cheaper.

Heck, we're only barely seeing 2Gbit GDDR5. Sure, supply can ramp up to meet demand of the console manufacturers on top of existing high end GPUs, but that is hardly comparable to the rest of the PC market demanding DDR3 manufacturing. In the same way, the cost to make XDR2 is hardly going to be comparable.


The difference between a 1GB 6950 and a 2GB version is about 20 bucks on newegg. I'm sure the real cost difference is a fraction of that, so perhaps a GB of GDDR5 costs a few dollars? $7?
 

I'm not sure that's really a reliable method of cost analysis, considering the wide range of prices for just the 2GB cards. The real BOM is hidden behind these prices, which are set at different profit margins for competition (if that makes sense). The price varies at Newegg between $270 and $300, and even the 840/1280MHz product is cheaper than one of the 810/1250MHz ones - so do you necessarily conclude that faster is cheaper? :p

From what I understand of high end GPUs, they have pretty high margins, so they have some leeway into pricing - I don't think your example is a clear indicator. *shrug*

-----------

Anyways, even if 1GB is $7, then the magical 4GB for console would be $28 or just under 10% of a hypothetical retail cost of $299. hm....

I was just wondering about the supposed $1B figure that was given for the doubling of memory in 360 from 256MB to 512MB, which I presume is spread over a projected lifetime and sales of say... I dunno, 50 million consoles for example. $20 for just 256MB of the latest GDDR3! Must have cost something ridiculous at the start of the gen. GDDR3 700MHz should be pretty low cost by now.
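Just to sanity-check the arithmetic in the last two paragraphs (the $7/GB guess, the $1B figure and the 50-million-unit lifetime are the hypothetical values already assumed above, not real numbers):

```python
# Hypothetical values from the discussion above, not measured costs
GDDR5_PER_GB = 7.0        # guessed $/GB of GDDR5
POOL_GB = 4               # the "magical" 4GB target
RETAIL = 299.0            # hypothetical retail price

mem_bom = POOL_GB * GDDR5_PER_GB        # $28 of memory
bom_share = mem_bom / RETAIL            # ~9.4% of retail

UPGRADE_COST = 1_000_000_000            # supposed $1B for the 256MB -> 512MB bump
LIFETIME_UNITS = 50_000_000             # assumed lifetime console sales
per_console = UPGRADE_COST / LIFETIME_UNITS  # $20 per console for the extra 256MB
print(mem_bom, round(bom_share, 3), per_console)
```

So with those assumptions, the $20-per-console figure for 256MB of launch-era GDDR3 really would dwarf $28 for a whole 4GB today.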

A naive calculation for Cayman's 389mm^2 @ $5K per wafer with something like 0.05 defects per cm^2 puts it at something like $42 per chip, but maybe someone from AMD can chime in. >_> I have no idea what it actually costs to manufacture the thing, though I can't see that closing the gap to the selling price.
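For anyone who wants to reproduce that naive estimate: a sketch using the standard gross-dies-per-wafer approximation and a Poisson yield model (the wafer price and defect density are the guesses from the post above, not real foundry numbers).

```python
import math

WAFER_DIAMETER = 300.0   # mm, standard wafer
DIE_AREA = 389.0         # mm^2, Cayman
WAFER_COST = 5000.0      # USD, guessed
DEFECT_DENSITY = 0.05    # defects per cm^2, guessed

# Usable area divided by die area, minus partial dies lost at the wafer edge
gross_dies = (math.pi * (WAFER_DIAMETER / 2) ** 2) / DIE_AREA \
             - (math.pi * WAFER_DIAMETER) / math.sqrt(2 * DIE_AREA)

# Poisson yield: probability a die has zero defects (/100 converts mm^2 to cm^2)
die_yield = math.exp(-DEFECT_DENSITY * DIE_AREA / 100.0)

cost_per_good_die = WAFER_COST / (gross_dies * die_yield)
print(round(cost_per_good_die, 2))  # roughly $41 with these assumptions
```

Around 148 gross dies at ~82% yield gives ~122 good chips per wafer, hence ~$41 each - close enough to the $42 ballpark.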

hm....
 

According to old BOMs of launch 360s, they paid $16.25 per 1Gbit chip ($65 for 512MB). PS3 was $12 per 512Mbit of XDR ($48 for 256MB, GDDR3 seemed to be factored into the GPU cost). I know costs can come down, but I have a tough time believing they would be down to the point where they could get 8-16 times the amount of memory today for the same cost.
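Those launch BOM figures are easy to check against chip counts (per-chip prices as quoted above; the helper function is just mine for illustration):

```python
def pool_cost(pool_mb, chip_mbit, price_per_chip):
    """Cost of a memory pool built from identical chips (hypothetical helper)."""
    chips = pool_mb * 8 // chip_mbit      # MB -> Mbit, then number of chips needed
    return chips * price_per_chip

xbox360_gddr3 = pool_cost(512, 1024, 16.25)  # four 1Gbit chips at $16.25 = $65.00
ps3_xdr       = pool_cost(256, 512, 12.00)   # four 512Mbit chips at $12.00 = $48.00
print(xbox360_gddr3, ps3_xdr)
```

Both totals match the quoted BOM lines, so the per-chip prices are internally consistent at least.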


If I wait a little longer you might add some more to this. :LOL:

But yeah with the possibly high markup, we don't know what's being trimmed from the margin to justify that retail difference while doubling the memory.
 
Sandy Bridge just got released. TSMC is on record saying their 20nm process will go into volume production sometime in 2013, hopefully just in time for Sony's console. It'll be interesting to see what kind of performance leap Intel's upcoming 3D tri-gate transistors bring.
 

TSMC was behind on 40nm, is further behind on 28nm, and will most certainly be even further behind on 20nm. Nearly three years will have passed between commercial 40nm and 28nm products, and the gap can only get bigger. I wouldn't expect a viable 20nm product before 2014. That's just me, though.
 
I find it quite amusing that most of the talk here focuses on numbers and not on needs. The only "wants" on the last 10 pages...
Go back 50 pages and you'll find that approach has been discussed already. ;) It's not that people are obsessing mindlessly over numbers either. We're not designing the next-gen consoles here, but predicting their technology based on technological progress in terms of lithography and manufacturing and available products. There's no point thinking a whole new CPU or GPU design is going to be wanted if such a thing doesn't exist and the console company isn't going to make it!
 
Can stacked memory be a realistic solution to the 4GB target?

AMD is playing with this concept; some months ago they evaluated the option of producing the memory chips themselves.
 
New console to be based on the ARM architecture.



http://www.ign.com/articles/2011/11/07/report-xbox-720-will-be-smaller-cheaper-than-xbox-360#

This does make sense... a many-core chip based on OoO Cortex A15 cores.

There's already a 16-core Cortex A15 planned for 28 or 32nm, just in time for Xbox Loop.
There are no power consumption figures for the Cortex A15 just yet, but these chips are meant to compete in the tablet, laptop, and low-power server space.

But what about graphics? Perhaps a cluster of PowerVR6 cores as someone stated a while back?
Where did this 16-core A15 rumor originate? From what I read, the CoreLink CCI-400 allows scaling up to 8 A15 cores, which obviously doesn't seem to match the aforementioned claim.
 
DDR4 can be stacked, but we won't be getting decent speeds until 2014-2015.
Stacking alone is not enough; what's needed for significant performance increases is stacking + TSVs + massive numbers of I/Os... a mainstream DRAM standard like DDR4 won't offer those.
 
Al, how much RAM do you expect in the next gen consoles? I would guess 4GB, either DDR3 or GDDR5. Perhaps they could go with low power low clocked DDR3 and add a healthy amount of EDRAM to alleviate the biggest bandwidth constraints?

sebbbi, given a 150-175W TDP, $250-300 BOM and 28nm TSMC manufacturing process, what would your next gen console look like? Does my proposition make sense?
  • Small but powerful dual- or tri-core OoO CPU with good SMT capabilities, like the i3-2100
  • GPU comparable to the GTX560 perhaps with a DX11.1 or DX12 feature set (perhaps the GPU & CPU can be combined into a SoC for really fast communication between the two?)
  • 4GB DDR3 with a good amount of embedded RAM that is read/write capable
 
If they are using slow and cheap RAM, they could at least cram in 2-3 times more of it... I don't believe this rumor [for now].

Not that this rumor has a shred of credibility (at least it's not aiming incredibly low with the specs though), but it seems to imply a VRAM/System RAM split setup. No way the DDR3 is fast enough for the GPU anyway.

So I dunno, is it 1 GB or 2GB of VRAM in the fake console? lol.

Perhaps they could go with low power low clocked DDR3 and add a healthy amount of EDRAM to alleviate the biggest bandwidth constraints?

I've been wondering, is that feasible?

From what I know, I think not.
 

Well, that's basically just the same design philosophy as the 360. :p There's certainly a lot of functionality they could add to make the edram more useful, which we've discussed on the forums here before over the years.

The Slim's eDRAM dimensions imply they are still on 65nm eDRAM for the 360, which, for mass production by mid-2010, fits the timeline given by TSMC's roadmap. 40nm LP only became available partway through 2010, and I don't believe they would have had time to do any production on 40nm G to supply the market. But I digress.

eDRAM reduction has always lagged, so I wonder if they might consider making a chip that's again roughly the same size as the original 90nm 10Meg chip. With 40nm, that's theoretically >8x density.

----------

At any rate, this (eDRAM + DDR3 cost) would then have to be weighed against going with, for example, no eDRAM but the fastest GDDR5 on at least a 128-bit bus, or the legendary 256-bit. The latter places inherent restrictions on the minimum die size (the smallest 256-bit GPU was 190-200mm^2) and creates issues down the road for cost-reducing a hefty chunk of silicon (not to mention day-1 costs, considering Xenos was already under 185mm^2 at the start).

(big die size + GDDR5) vs (smaller die + edram + DDR3)

----------
Mind you, AF does cost some bandwidth. If they could get texturing with the edram, so much the better.
----------


hm.... food for thought as usual.

Another thing to keep in mind is power density or thermal dissipation with respect to using smaller process nodes. Not exactly free stuff to ignore if you're thinking about bigger chips compared to the ones used at say 90nm...
 