Predict: The Next Generation Console Tech

Status
Not open for further replies.
Why are we still so hell-bent on a maximum console TDP of ~200W?
What prevents console makers from going up to 250W?
Were there any significant improvements in cooling tech in the last five years, ever since the last batch of 200W PS3s and X360s was on the market?
What is the cost of vapor chamber cooling for modern GPUs (and did the PS3/X360 use vapor chambers)?

For the same reason a performance-class GPU is sold at $200-250, and a high-end GPU at $400-500. I doubt OVERALL costs scale linearly with TDP and speed.

Console makers, at least the smart ones, don't want the fastest console out there. They want the best design, one that achieves the right tradeoff between cost and performance. If a competitor has 2x the power budget and is going to be twice as fast, their console is going to be 3-4 times as expensive as mine.
 
Could they have updated the Xbox 360 in such a way that it needs to go back to developers for testing and development purposes as a new dev kit? What if they're trying to turn it into a multi-use device? Would say for example adding another core or something silly in order to give the console better multitasking as say a DVR/Games console etc rolled into one mean they'd need to roll out new development kits?

Hm.. you mean for multitasking whilst playing games? I suppose the "easiest" is to design a new revision of the CPU/GPU adding a dedicated PPE core (+2 threads).

For simplification in manufacturing, they can shift production to this new design exclusively so all new units come with it. They'd probably want to phase out the old SKUs simultaneously. "All new units come with enhanced non-gaming functionality".

Aside from updating the XDKs for the latest CPU/GPU revisions, the biggest step forward for development was really the additional 512MB of GDDR3 once the 1Gbit chips became more readily available. In the interest of applications development though, MS would indeed need a new kit for anyone to take advantage of any additional core or other hardware improvement, but with the caveat that the 4th core etc be transparent to games and be used solely for OS and apps.

On the other hand, I'm not too clear if having such a feature as DVR would be particularly good for gaming performance where installed games are making heavy use of the hard drive.

It would be an interesting development none-the-less if such a hardware improvement was developed for exclusively running apps (not running games simultaneously) just to give devs extra oomph beyond the 5 threads they already get access to.

Who knows, perhaps at 28nm having additional hardware might be necessary for padding reasons, so it might be a feasible path to take. What I mean specifically is that if the current chip is shrunk, it might be too small for the GDDR3 and eDRAM I/O etc. Adding a new core would let them pad the chip layout so it's just big enough (they'd have to add more L2 as well) whilst still being smaller than the 45nm design, i.e. still cheaper to produce.
 
For the same reason a performance-class GPU is sold at $200-250, and a high-end GPU at $400-500. I doubt OVERALL costs scale linearly with TDP and speed.

Console makers, at least the smart ones, don't want the fastest console out there. They want the best design, one that achieves the right tradeoff between cost and performance. If a competitor has 2x the power budget and is going to be twice as fast, their console is going to be 3-4 times as expensive as mine.

Not if the competitor uses the right tech & launches a year or so after.

I know the Dreamcast & the N64 are in 2 different generations, but they were only 2 years apart & had about the same launch price.

So say the next Xbox launches at the end of 2012 & the PS4 launches in 2014: it could come close to 2X the power at around the same price, especially if Kinect 2 is standard with the next Xbox like the Wii U Pad is standard with the Wii U.

So if the PS4 is going for power while the other 2 are going for a better controller interface, it could actually give 2X the power without being 3-4 times the price.

Hell, the Xbox 360 wasn't 3-4 times the price of the Wii & it came out a year before & was way more powerful.
 
Hm.. you mean for multitasking whilst playing games? I suppose the "easiest" is to design a new revision of the CPU/GPU adding a dedicated PPE core (+2 threads).

For simplification in manufacturing, they can shift production to this new design exclusively so all new units come with it. They'd probably want to phase out the old SKUs simultaneously. "All new units come with enhanced non-gaming functionality".

Aside from updating the XDKs for the latest CPU/GPU revisions, the biggest step forward for development was really the additional 512MB of GDDR3 once the 1Gbit chips became more readily available. In the interest of applications development though, MS would indeed need a new kit for anyone to take advantage of any additional core or other hardware improvement, but with the caveat that the 4th core etc be transparent to games and be used solely for OS and apps.

Yeah, multi-tasking whilst playing games. I assume I'm on topic because this is future console technology compared to current generation consoles even if it doesn't exactly meet the criteria of 'next generation console'. They'd need to make sure that past, present and future games are compatible with the update, hence new development kits. I suppose it'd also make development a little easier with additional performance from the development kits.

In the context of the recent shipping numbers, perhaps they do intend to flood the market with the old model console in order to better facilitate the transition to a new style of console. Perhaps there is some intention to future proof the current design and keep it relevant to their needs whilst simultaneously developing a new line of consoles for the future.

On the other hand, I'm not too clear if having such a feature as DVR would be particularly good for gaming performance where installed games are making heavy use of the hard drive.

It would be an interesting development none-the-less if such a hardware improvement was developed for exclusively running apps (not running games simultaneously) just to give devs extra oomph beyond the 5 threads they already get access to.

Who knows, perhaps at 28nm having additional hardware might be necessary for padding reasons, so it might be a feasible path to take. What I mean specifically is that if the current chip is shrunk, it might be too small for the GDDR3 and eDRAM I/O etc. Adding a new core would let them pad the chip layout so it's just big enough (they'd have to add more L2 as well) whilst still being smaller than the 45nm design, i.e. still cheaper to produce.

We don't really know how much padding is already in the existing design unless someone knowledgeable is willing to make an educated guess about it. With further die shrinks, even incorporating the eDRAM into the die may leave so much free die space that they may as well add another core to the design, as it would otherwise be wasted.

I think the only two major questions are bus contention with an extra core on the CPU, given the fixed bandwidth there, and the RAM bandwidth to main memory (although we don't know how much the OS is allocated). I don't think HDD use will be a problem, as the console was designed for no HDD or a 20GB HDD initially; a 250GB or greater HDD would likely have spare throughput compared to the base design if it were used for something else.
 
Yes, we really can expect that the die budget will be geared more toward the GPU with the GPU taking more burden off the CPU.

Yes, we really can expect a ~500mm2 budget as it was last time.

At most we'll see 300mm2 for the GPU, IMO. Maybe a gimped HD7890 downclocked by 200-300MHz.
 
In the context of the recent shipping numbers, perhaps they do intend to flood the market with the old model console in order to better facilitate the transition to a new style of console. Perhaps there is some intention to future proof the current design and keep it relevant to their needs whilst simultaneously developing a new line of consoles for the future.

Seems like a neat possibility. The console has yet to hit the $99 bracket and the chips themselves ought to be fairly inexpensive to produce by now. There does seem to be a drive towards the multimedia set top box, so extending the current gen design with some tweaks to the CPU to enable more functional non-gaming applications might not be so out of this world. At the same time they could make use of newer process tech to ensure that the costs don't really go that high whilst reducing the TDP footprint and perhaps allow for an even smaller chassis -> completely new line of SKUs and easily identifiable as to what systems are capable of said new multimedia functions.

We don't really know how much padding is already in the existing design unless someone knowledgeable is willing to make an educated guess about it.
Well, the smallest chip with a 128-bit GDDR3 interface is in the region of 140mm^2, and we do know that the eDRAM I/O is going to push that minimum die size up even more just by the nature of the amount of perimeter space needed. The Slim's CGPU is currently ~165mm^2 on 45nm. With a proper shrink of the CMOS, you'd be looking at less than 100mm^2 at 28nm.
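For a rough sanity check on that shrink estimate, here is a back-of-the-envelope sketch of ideal optical scaling (real shrinks fall well short of this because the pad ring and I/O barely shrink, which is exactly the padding problem being discussed):

```python
# Ideal (optical) area scaling from 45nm to 28nm for the Slim's ~165mm^2 CGPU.
# Real-world dies never hit this ideal, so "<100mm^2" sits between the
# optical limit (~64mm^2) and the original 165mm^2 - a plausible estimate.
area_45nm = 165.0            # mm^2, Slim CGPU on 45nm (figure from the post)
scale = (28.0 / 45.0) ** 2   # linear-dimension ratio, squared for area

area_28nm_ideal = area_45nm * scale
print(f"Ideal 28nm area: {area_28nm_ideal:.0f} mm^2")  # ~64 mm^2
```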

With further die shrinks, even incorporating the eDRAM into the die may leave so much free die space that they may as well add another core to the design, as it would otherwise be wasted.
I think for the purpose of manufacturing, keeping the eDRAM separate may possibly still be easier if not cheaper given the complexities of eDRAM transistor design.

But yes, adding another PPE core along with more L2 and whatever else they need is probably extremely cheap and would suit the purpose of padding the chip area on a future process so that it's big enough to accommodate the two memory buses. (Not unlike how AMD had to add two more SIMDs to rv770 otherwise it was just wasted space).

I think the only two major questions are bus contention with an extra core on the CPU, given the fixed bandwidth there, and the RAM bandwidth to main memory (although we don't know how much the OS is allocated).

I don't think HDD use will be a problem, as the console was designed for no HDD or a 20GB HDD initially; a 250GB or greater HDD would likely have spare throughput compared to the base design if it were used for something else.
At this stage of the game, such a new SKU would come with a large hard drive to further support the DVR functionality, but people will obviously still be using it to install their games. What do you do when there are people using the hard drive for their games then? There's going to be contention for reading/writing i.e. hitching during recording or really shitty game loading.

So sure, DVR functionality would be fine as long as you don't have a game using the HDD, but that's not what I'm talking about.
 
At most we'll see 300mm2 for the GPU, IMO. Maybe a gimped HD7890 downclocked by 200-300MHz.

I don't know why you'd think that would be the absolute most... :???:

If we agree that the die budget will be roughly the same as last gen (450mm2-500mm2), and we agree that general-purpose GPUs can offload more work than in the past (along with the fact that better graphics sell more easily than physics or AI), then it is perfectly reasonable to assume more of that 450-500mm2 budget will be used for the GPU than last gen.

Xbox 360:
176mm2 - Xenon (CPU)
80mm2 - eDRAM
182mm2 - Xenos (GPU)
438mm2 total

PS3:
240mm2 - RSX (GPU)
230mm2 - Cell (CPU)
470mm2 total

Looking at those numbers, it seems MS was far more aggressive in allocating die space for the GPU, with 60% of the die being used for graphics, while Sony seems pretty balanced with a near 50% split. But one should keep in mind that a large portion of the Cell was often used to help the RSX get its job done.

So 262mm2 last time for MS, 240mm2 for Sony.

As long as they leave enough die space to have a workable CPU (~50mm2 would net about the same CPU as they have now at 28nm), the rest can be allocated for the GPU.

That puts the max GPU size at around 450mm2.

But they will likely want some improvement on the CPU side and also they will likely want decent yields ... so I'd assume a max size of 400mm2.

Minimum of 300mm2.
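Plugging those figures in (a quick back-of-the-envelope sketch just to check the shares and the projected ceiling; all numbers are the estimates quoted in this post, not official die sizes):

```python
# Last-gen die budgets (mm^2), as quoted above.
xbox360 = {"Xenon CPU": 176, "eDRAM": 80, "Xenos GPU": 182}
ps3 = {"Cell": 230, "RSX": 240}

x_total = sum(xbox360.values())                        # 438 mm^2
x_graphics = xbox360["eDRAM"] + xbox360["Xenos GPU"]   # 262 mm^2
print(f"360 graphics share: {x_graphics / x_total:.0%}")  # ~60%

p_total = sum(ps3.values())                            # 470 mm^2
print(f"PS3 GPU share: {ps3['RSX'] / p_total:.0%}")       # ~51%

# Next-gen ceiling under the same ~500mm^2 total with a ~50mm^2 CPU:
budget, cpu = 500, 50
print(f"Max GPU area: {budget - cpu} mm^2")  # 450 mm^2
```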
 
I don't know why you'd think that would be the absolute most... :???:

If we agree that the die budget will be roughly the same as last gen (450mm2-500mm2)
Is that a safe assumption? There are two considerations in picking a transistor budget: cost at launch, and ability to die-shrink. While a next-gen console can afford to launch at whatever price, to hit the mainstream it has to shrink well, and I don't know if conventional lithography is going to manage that. We might see other techniques like stacked packages, but overall these boxes have to be designed with one eye on the profitable $150 price point, which needs small chips. The console companies might be a little more cautious and keep budgets lower.
 
Is that a safe assumption? There are two considerations in picking a transistor budget: cost at launch, and ability to die-shrink. While a next-gen console can afford to launch at whatever price, to hit the mainstream it has to shrink well, and I don't know if conventional lithography is going to manage that. We might see other techniques like stacked packages, but overall these boxes have to be designed with one eye on the profitable $150 price point, which needs small chips. The console companies might be a little more cautious and keep budgets lower.

Without disagreeing on the assumption part...I think the ultra low price point is not going to be in their plans next gen. They'll make sure that they won't have to go really low. The machines are going to be even more multipurpose and get more functions and content as time progresses keeping things fresh.

Current gen, some other consumer electronics and services have already maintained the price points fairly well, although PS3's high initial price skews this a bit. Going really low is not good business for anybody and it's going to be more about the ecosystems anyway.
 
Considering we're in the 7th year of this gen with no $150 pricepoint, I think they might have a slightly higher target than $150 for profitability.
 
Current gen, some other consumer electronics and services have already maintained the price points fairly well, although PS3's high initial price skews this a bit. Going really low is not good business for anybody and it's going to be more about the ecosystems anyway.
Can't agree. The purpose of these hardware boxes is to shift content. That used to be games, but now it's media as well. Sony and MS want their boxes in as many homes as possible, and a low price facilitates that. Plus, cheaper hardware can mean more profit at higher prices too. The magical $150 and $99 price points may have moved up to $200 and $150 given inflation and expectations, but they still need to be reached. If I were convinced chips would shrink okay over the coming few nodes, I'd say it's a non-issue, but to aid consumer adoption, MS and Sony must be considering what the price will be 5+ years after launch.

What may change that is the Apple model: releasing updated hardware that the hardcore upgrade to, passing their old boxes on. That works pretty well with slim models, I think (except this gen, where slims were bought to replace busted phats rather than as an upgrade ;)). If added value can be patched into the high end a la iPad, the ecosystem can be grown without worrying so much about the entry-level price. That's a very risky model for consoles though. Changing the hardware is dubious. Still, if next gen sees more abstraction of the hardware, which is pretty likely IMO, that'll be tenable.
 
What may change that is the Apple model
Part of the Apple model is basically "if it says Apple on the box you can earn double the margin of your competitor". Also scaling a game is slightly more difficult than scaling a utility or casual shit, ensuring forward compatibility will require developers actually providing support for their games long after they are shipped ...

Which of the console manufacturers is ready to beat EA into submission on that front?
 
It makes perfect sense for Sony to go with Nvidia because of backward compatibility.

Not sure that's the sort of reasoning you want to attach to such a business arrangement; we've already got the thread on the importance of BC, and quite frankly, it's not even a good reason considering MS did just fine switching to ATI back in 2005. And just because it's the same company doesn't mean the new generation of hardware will be backward compatible. Ultimately, it's not the graphics tech features of RSX that are going to hinder the process, but the actual flaws and quirks that the devs have taken advantage of in their lower level access to the hardware. Replicating those in the new generation of hardware would seem like a waste of time and resources. Considering the number of generations between G7x and post-Fermi, I doubt backwards compatibility is going to be that much better if they went with another company just by way of the complete architectural shifts.

For the actual proprietary nV extensions, I don't see why Sony wouldn't just license them for emulation purposes if they do go with a different manufacturer. Since it would only be for BC purposes, they could make it a per-title sort of thing or draw up a contract that isn't shit. Ultimately, I don't think it'd be wise to hinder the next-gen possibilities if they find someone else's design more suited to their goals. Also consider that nV GPUs have traditionally been power-sucking behemoths...

There are more important factors to take into consideration and unfortunately, we have no details surrounding IP licensing, manufacturing, what Sony is willing to pay, or what Sony's actual goals are for a next gen console, which will play a big role in the alternatives they are privy towards.
 
I'd say one factor is whether Sony can leverage the fact that nV risks business (potentially 70 million or more consoles) and consumer mindshare (imagine the flame wars) by being excluded from a console gen into a sweeter deal for Sony.

Would Sony be willing to use a hotter/less power efficient part if it has minor performance advantages? Especially if they get a sweetheart deal?
 
Hard to say. We don't know how nV will go about negotiations, or just how much it ends up being worth, i.e. absolute royalty per chip + being paid to engineer something, in light of competing with AMD's offer on the table. More realistically speaking, they would have to devote a significant amount of engineering to a more custom console part than what RSX turned out to be, if you know what I mean. Who knows if nV has the resources considering their other hardware endeavours, but if Sony has their mind already set on nV over AMD's offerings, then they ought to allow a lot more lead-in time for that sort of custom design compared to the rather lackluster timetable for PS3 (unless they're OK with just taking an off-the-shelf architecture and only modifying the # of components).

I'm not too clear on exactly what differs between AMD and nV allocating engineering resources towards non-desktop designs. Maybe they can surprise us for once after working on two consoles.
 
I have a friend who was working at a chip vendor who was negotiating with Sony to get their USB chipset into the PS3 in early / mid 2006. I was shocked that Sony was still trying to lock down hardware componentry that late in the process.

Maybe they were negotiating for a post-launch revision of the hardware at that point, but it seemed that Sony was really cutting it close with that launch.

(Obviously you don't design a system around a USB chip, so that doesn't compare to Sony's timetable in locking down the GPU, but it still seemed sort of last minute to me.)
 
jonabbey said:
Maybe they were negotiating for a post-launch revision of the hardware at that point, but it seemed that Sony was really cutting it close with that launch.

This is almost certainly the case... That, and also negotiating a 2nd or 3rd source for components. It's also not uncommon to source a part and have that vendor fail to meet your needs, leaving you high and dry and forcing you to look elsewhere (I'm looking at you, Atmel!)...
 
Seems like a neat possibility. The console has yet to hit the $99 bracket and the chips themselves ought to be fairly inexpensive to produce by now. There does seem to be a drive towards the multimedia set top box, so extending the current gen design with some tweaks to the CPU to enable more functional non-gaming applications might not be so out of this world. At the same time they could make use of newer process tech to ensure that the costs don't really go that high whilst reducing the TDP footprint and perhaps allow for an even smaller chassis -> completely new line of SKUs and easily identifiable as to what systems are capable of said new multimedia functions.

But yes, adding another PPE core along with more L2 and whatever else they need is probably extremely cheap and would suit the purpose of padding the chip area on a future process so that it's big enough to accommodate the two memory buses. (Not unlike how AMD had to add two more SIMDs to rv770 otherwise it was just wasted space).

Isn't the EDD the 'Apple competitor' of their divisions? There is probably a push somewhere to try to keep the average sale price stable and add value rather than trying to reduce the price too far too fast. I guess they'd have three major incentives for tweaking.

1. Support Wii U like functionality. Since they have a tablet of their own coming out next year this seems like a good possibility. I guess they might need new wireless hardware to support this?

2. Improve Kinect support, perhaps by making the system even more responsive. Better USB throughput would be nice.

3. Make the system more versatile for embedded applications such as a direct competitor for Google TV and Apple TV.

Well, the smallest chip with a 128-bit GDDR3 interface is in the region of 140mm^2, and we do know that the eDRAM I/O is going to push that minimum die size up even more just by the nature of the amount of perimeter space needed. The Slim's CGPU is currently ~165mm^2 on 45nm. With a proper shrink of the CMOS, you'd be looking at less than 100mm^2 at 28nm.

I think for the purpose of manufacturing, keeping the eDRAM separate may possibly still be easier if not cheaper given the complexities of eDRAM transistor design.

If they do manage to bring the eDRAM onto the CPU die, they'd be able to bring the RAM on-package, and that'd save a reasonable quantity of space in the system. I guess it really comes down to whether they want an embedded box or a discrete box, i.e. giving it away in TVs and cable boxes vs. selling it separately. It'd likely be worth it with the former but not with the latter.

At this stage of the game, such a new SKU would come with a large hard drive to further support the DVR functionality, but people will obviously still be using it to install their games. What do you do when there are people using the hard drive for their games then? There's going to be contention for reading/writing i.e. hitching during recording or really shitty game loading.

So sure, DVR functionality would be fine as long as you don't have a game using the HDD, but that's not what I'm talking about.

They already allow background downloading don't they? So it wouldn't be a real stretch to allow for other processes in the background, right?

It is all very interesting, really. On the one hand, they can add value without adding significant cost on a per-system basis, with the caveat that it'd take valuable software/hardware engineering time away from their next-generation console. On the other hand, they can go for strict cost reductions and simplification by keeping the system the same and outsourcing the chip modification and testing to other companies if they simply go 32nm SOI and embed the eDRAM. I'm assuming here that implementing any extra features will take a considerable quantity of man-hours, although if the nextbox shares the same functionality then it won't all be wasted. Adding more features will have to be weighed, I guess, against the cost of testing and development; with eDRAM the only consideration is making the bus as fast/slow as it was when it was discrete, and they've already achieved that when they did the CGPU design.
 