Predict: The Next Generation Console Tech

Things like Digital Foundry (and the reality of actually playing games) continue to show just how little tech bullet points matter in terms of actually delivering the best gaming platform.

DF also shows just how little awards matter in terms of quality too, right? ;) Those are more than "tech bullet points". Those are experiences (as well as audio quality, video quality, production quality, story quality, gameplay quality, etc.). That's what gaming is all about, right (experiences)? If not, why are you gaming?

Anyway, I believe both next gen consoles have a good chance of ending up with similarly capable GPUs again. That would put the difference back on the CPUs and how much data you can feed them with.
 
I'm wondering what functionality will MS bring to their next gen console. I'm thinking the next Xbox will have, at least, the following features:

LPCM 7.1 audio
True S3D support
HDMI 1.3 or 1.4 support
Blu-ray Drive
Web browser integration

If those features don't matter this gen, why would they matter next gen? They probably will have most of those features, but they aren't really going to be selling points.
 
I searched for HD 6850 power consumption figures; hardware.fr is pretty reliable in this regard, and they measure 107 W under 3DMark and 116 W under FurMark. That's close to acceptable for a console, especially as it takes into account 2GB of GDDR5.
A slight downclock would make it workable. Finer lithography may allow packing extra SIMDs to make up for the lower frequency. Still, it's not a completely straightforward call: the HD 5850, while clocked a bit lower than the 6850, packs more SIMD/texture units and consumes a lot more power (the comparison is valid: same memory speed, same bus width, etc.).

So sadly it's tough to pack more than HD 6850-class performance within a console power budget (and we're speaking of a healthy power budget).
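To put rough numbers on that, here's a quick back-of-envelope sketch of what a downclock/undervolt could buy, assuming dynamic power scales roughly with frequency times voltage squared. The 107 W is the hardware.fr figure above; the stock voltage and the hypothetical console clock/voltage are just my guesses for illustration, not real specs:

```python
# Rough back-of-envelope: how much a downclock/undervolt could buy.
# Assumes dynamic power scales ~ frequency * voltage^2; real chips also
# have static leakage, so treat this as an optimistic estimate.

measured_power_w = 107.0   # hardware.fr 3DMark figure for the HD 6850
stock_clock_mhz  = 775.0   # HD 6850 reference engine clock
stock_voltage    = 1.10    # illustrative assumption, not a datasheet value

target_clock_mhz = 650.0   # hypothetical console downclock
target_voltage   = 1.00    # hypothetical lower voltage at that clock

scale = (target_clock_mhz / stock_clock_mhz) * (target_voltage / stock_voltage) ** 2
print(f"Estimated GPU power after downclock: {measured_power_w * scale:.0f} W")
# -> roughly 74 W, which starts to look console-friendly
```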

Hardware designers are facing quite a challenge. Clearly a one-chip solution is at the same time the most efficient and the cheapest. The problem is that it may not pack enough muscle to be a worthy replacement for our consoles (I believe the replacement will have to last more than 5/6 years, too).

Designing an efficient two-chip system is in fact non-trivial, especially if you want to provide a UMA space. There are plenty of trade-offs that tax the system's efficiency:
*You're likely to have the memory controller attached to the GPU, which implies that investment in the CPU will be somewhat hindered. Actually, it may not make sense to invest too much in the CPU, as it could be quite bandwidth constrained. In fact, with a 128-bit bus even the GPU could prove bandwidth constrained.
*EDRAM is a solution to alleviate this constraint, but it adds extra cost. You need more than 10MB, and you may need a faster link than the one in the 360 (which provides 32GB/s), as you're likely to move more data. Actually, to make the most of this pool of EDRAM you may need extra logic, as it makes sense for it to be accessible from the GPU. Moving data to the EDRAM chip, then from EDRAM to main memory through the GPU, and then back to the GPU sounds pretty inefficient. Overall you would want the chip to be more complex than Xenos's daughter die, to make the most of what should most likely be a healthy piece of silicon.
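To give a feel for those two claims ("more than 10MB" and "a faster link than 32GB/s"), here is a rough, purely illustrative estimate; the resolution, MSAA level, overdraw and read-modify-write factors are my assumptions, not anything official:

```python
# Quick check on the two claims above: "you need more than 10MB"
# and "you may need a faster link than the 360's 32 GB/s".
# Render-target formats and per-frame read/write factors are assumptions.

width, height = 1920, 1080            # assume a 1080p target next gen
bytes_color   = 4                     # RGBA8 colour
bytes_depth   = 4                     # 32-bit depth/stencil
msaa          = 2                     # 2x multisampling, a guess

fb_bytes = width * height * (bytes_color + bytes_depth) * msaa
print(f"Colour+depth at 1080p 2xMSAA: {fb_bytes / 2**20:.1f} MB")   # ~31.6 MB

# Traffic over the GPU<->EDRAM link: assume every pixel is touched a few
# times (overdraw) and blending/depth testing means read-modify-write.
fps, overdraw, rmw = 60, 4.0, 2
gb_s = width * height * msaa * overdraw * (bytes_color + bytes_depth) * rmw * fps / 1e9
print(f"Rough link traffic: {gb_s:.0f} GB/s")
# -> ~16 GB/s for colour/depth alone; crank the MSAA, overdraw or
#    resolution and a 360-style 32 GB/s link is quickly saturated.
```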

Overall, my understanding is that making an optimal two-chip system is a tough job; you may actually end up with a three-chip system, or with limiting constraints on the programming model (tiling, giving up on UMA, etc.).

I've wondered about the idea of using two of the same chip, i.e. two APUs. In regard to cost it's the same as a CPU+GPU setup (two buses from the APUs to RAM and a fast interconnect between the two APUs). There's a big "but", though: that's a dual-GPU setup, and feeding two GPUs is still not trivial.

I've read the slides released about their upcoming GPUs and they don't help on the matter. If I understand right, the "graphics pipeline" is still handled by the command processor in a monolithic fashion (no matter that geometry setup, tessellation and rasterization are now distributed). So it would still appear to programmers as two chips.
But there is another "but": whereas the graphics pipeline is still monolithic, it seems that AMD broke it up when it comes to compute operations. As I get it, their next GPUs can be viewed as multiple compute devices; there will be multiple ACEs, which stands for Asynchronous Compute Engines :)
It's irrelevant for us whether the ACEs and the "command processor/thingy in charge of the graphics pipeline" are implemented as one monolithic piece.
This is of great help when we consider using two chips: developers wouldn't see 2 GPUs but, for example, a pool of 8 compute devices (say each GPU includes 4 of them). As AMD states, those are asynchronous, which means devs can conveniently spread rendering tasks among those devices; they are like independent cores or SPUs.
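To illustrate the idea (purely a toy sketch; the device names, the 4-ACEs-per-GPU split and the task list are made up for the example), the developer-visible model would just be a pool of queues that you spread independent jobs across, much like cores or SPUs:

```python
# Toy illustration of the "pool of compute devices" idea: two chips
# exposing, say, 4 asynchronous compute queues each, with work spread
# over the pool without caring which physical GPU a queue lives on.

from itertools import cycle

compute_devices = [f"gpu{g}.ace{a}" for g in range(2) for a in range(4)]

render_tasks = [
    "shadow map cascade 0", "shadow map cascade 1",
    "tile light culling", "SSAO", "particles",
    "post-process blur", "skinning", "physics broadphase",
]

# Round-robin the independent tasks across the 8 logical devices,
# the same way you'd spread jobs across cores or SPUs.
schedule = {}
for task, device in zip(render_tasks, cycle(compute_devices)):
    schedule.setdefault(device, []).append(task)

for device, tasks in schedule.items():
    print(device, "->", tasks)
```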

Actually, it got me thinking about the relevance of the graphics pipeline. I wonder if it could make sense, assuming a closed box, to simply throw away the parts "dedicated to the graphics pipeline" altogether and move to a software implementation. I guess it has to do with their size: if they amount to a tiny fraction of the chip, who cares, but the cost should also be evaluated against the implementation cost. In the next AMD GPUs there is an export bus that moves data from the CUs to the "graphics pipeline". The overhead could be bigger than it looks, and passing on it may allow a more streamlined design, or at least keep it minimal.
Honestly, I'm drooling to learn more about those parts (and to see how they perform). I drool even more over a further step in the direction of a pure compute architecture. If it means giving up on the rasterizer and tessellation units, I'm OK with that as a trade-off. Rendering through compute units only may allow techniques that lower the (external) bandwidth requirements. Why not implement something like the TBR intended for Larrabee?
Basically, having two APUs that each deliver ~1 TFLOPS is achievable within a reasonable power envelope, so we could be looking at ~2 TFLOPS of pretty usable compute power (plus whatever the CPU cores add to this figure). That's as much as the optimistic figure Intel was aiming for with Larrabee, and most likely the architecture will be (way) more efficient. The cost, distributed across plenty of SIMDs, should be low enough.
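A quick sanity check of that ~1 TFLOPS-per-APU figure, with the usual caveat that the SIMD counts and clock below are illustrative guesses, and that the 2 FLOPs/ALU/cycle assumes a fused multiply-add:

```python
# Back-of-envelope for the "~1 TFLOPS per APU, ~2 TFLOPS total" claim.
# SIMD/ALU counts and clocks are illustrative assumptions, not leaked specs.

def gpu_flops(simd_count, alus_per_simd, clock_ghz, flops_per_alu=2):
    # flops_per_alu = 2 assumes one multiply-add per ALU per cycle
    return simd_count * alus_per_simd * clock_ghz * 1e9 * flops_per_alu

per_apu = gpu_flops(simd_count=10, alus_per_simd=80, clock_ghz=0.7)
print(f"Per APU : {per_apu / 1e12:.2f} TFLOPS")   # ~1.1 TFLOPS
print(f"Two APUs: {2 * per_apu / 1e12:.2f} TFLOPS")
```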
I've read concerns about tessellation for Larrabee-like / TBR kinds of rendering. Honestly, tessellation sounded great, but I see no great use of it coming; Carmack might be right about it. Anyway, taking into account the forecasts for the tech, I believe that passing on it is not unreasonable.
 
If those features don't matter this gen, why would they matter next gen? They probably will have most of those features, but they aren't really going to be selling points.

If that next-gen tech doesn't/didn't matter, what sense would it make to waste precious console budget money on it? It wouldn't make sense; therefore its inclusion in the next Xbox would be proof that it matters (obviously).
 
If that next-gen tech doesn't/didn't matter, what sense would it make to waste precious console budget money on it? It wouldn't make sense; therefore its inclusion in the next Xbox would be proof that it matters (obviously).

You misunderstand me. If it somehow had made sense for MS to release a "360+" 1 year later than the 360 and $200 more expensive and those were the additional features that resulted, it wouldn't have been an impressive product. At this point, after all this time, adding those features would be an even less impressive addition. They are throw-ins.
 
DF also shows just how little awards matter in terms of quality too, right? ;) Those are more than "tech bullet points". Those are experiences (as well as audio quality, video quality, production quality, story quality, gameplay quality, etc.). That's what gaming is all about, right (experiences)? If not, why are you gaming?

I game for HDMI 1.3 and 1.4 support, just like everyone else. ;)

Technology is a means to an end. Tech bullet points can become an end unto themselves for console warriors though, normally after losing the popularity and/or Digital Foundry battles.

If people don't care about the "end" or if the end can be reached another way (without a particular checkbox feature) then the checkbox feature in and of itself doesn't matter! Things that don't matter now may matter in the future. Things that matter now may not matter later. Some things will never matter.

Anyway, I believe both next gen consoles have a good chance of ending up with similarly capable GPUs again. That would put the difference back on the CPUs and how much data you can feed them with.

They could well end up with similar GPUs this time. They might even end up with similar CPUs if IBM have variations on a theme that they want to sell to everyone...
 
I wonder if the CPU in the next Xbox will be significantly more powerful than the Cell in the PS3. It should be interesting!

Impossible, no console CPU ever released will be more powerful than teh Cell.

If those features don't matter this gen, why would they matter next gen? They probably will have most of those features, but they aren't really going to be selling points.

Those features only matter to people who prefer the ps3 for obvious reasons.
 
Impossible, no console CPU ever released will be more powerful than teh Cell.
Those features only matter to people who prefer the ps3 for obvious reasons.
Well, that's still kind of true: plenty of workloads that map (way) better on CPUs than on GPUs don't scale that much with the number of cores. Quite often, past 3/4 cores there are diminishing returns.

So it could be tough for a quad-core to top the Cell in throughput; actually, it may not even be a "wanted feature".
 
Well, that's still kind of true: plenty of workloads that map (way) better on CPUs than on GPUs don't scale that much with the number of cores. Quite often, past 3/4 cores there are diminishing returns.

So it could be tough for a quad-core to top the Cell in throughput; actually, it may not even be a "wanted feature".

I was just having fun with it since I think it's silly to even attempt any kind of comparisons before the consoles are even close to being released. For some reason it seems like everything has to be a pissing contest for some people.
 
I was just having fun with it since I think it's silly to even attempt any kind of comparisons before the consoles are even close to being released. For some reason it seems like everything has to be a pissing contest for some people.

Did my post offend you? Because that was not the intent at all.
A quad-core served by 4-wide SIMD units won't match the Cell in raw throughput. You can be sure that fans will throw that fact in your face no matter how well the chip fills its purpose.
So yes, plenty are in a pissing contest, but that was not the purpose of my post. There was some humour and irony mixed in ;)
 
Impossible, no console CPU ever released will be more powerful than teh Cell.
I wouldn't be surprised if on batch tasks small enough to fit in local memory and allow some low level optimization teh Cell will outperform CPUs in the next gen consoles, even if they have higher peak performance.

They could and should go with 8+ cores, in which case I would assume a comfortable lead even at lower utilization at which Cell rules supreme, but I can easily see them cheaping out and going with just a quad core.
 
Did my post offend you? Because that was not the intent at all.
A quad-core served by 4-wide SIMD units won't match the Cell in raw throughput. You can be sure that fans will throw that fact in your face no matter how well the chip fills its purpose.
So yes, plenty are in a pissing contest, but that was not the purpose of my post. There was some humour and irony mixed in ;)

Huh? No, I wasn't offended at all, I was just explaining that my original post was all in jest and not meant to be taken seriously. Also, I wasn't referring to you when I made the comment about pissing contests; it was just a general statement.

I wouldn't be surprised if on batch tasks small enough to fit in local memory and allow some low level optimization teh Cell will outperform CPUs in the next gen consoles, even if they have higher peak performance.

They could and should go with 8+ cores, in which case I would assume a comfortable lead even at lower utilization at which Cell rules supreme, but I can easily see them cheaping out and going with just a quad core.

I agree, as I said above to liolio, I was just joking around. Apologies for the confusion. :oops:
 
Can't say I've ever had optical drive issues on 360 (over four units, including friends'). *shrug* Maybe the full-disc installation played a role in that (for my own two units).

I "think" my dvd is failing yet again. That's 2 with failed drives and 2 returned within 30 days for a couple of reasons. Basically I have paid for 2, but gone through 4. It amazes me when I think about how much I have spent this gen. 2 400$ xboxes, 1 hdd upgrade 120$, the racing wheel 150$?, the wifi attachment 100$, 3 extra controllers at 40$ each ? and of course countless games at 60-70$ a pop. So what, 1290$ in hardware and the games? Damn that is a lot.

On topic: thinking out loud.

What about a release date dictated by the maturity of a process? What is expected to be ready sometime in 2013? You don't release until the process is there and stable enough to provide your parts at a cost you are willing to pay?

This time out it worked fairly well for MS. What if they go the exact same route: a console that's more developer friendly, if ultimately slightly less powerful in the long run? It seems to take a first party to really show off any hard-to-achieve (read: expensive) capabilities, and who does MS have left to take advantage of it? 343? The Halo games have never been about awesome, cutting-edge graphics. What is left?
 
What about a release date dictated by the maturity of a process? What is expected to be ready sometime in 2013? You don't release until the process is there and stable enough to provide your parts at a cost you are willing to pay?

That's every generation for basically every product, including consoles. The problem with this most recent generation is that ICs planned years in advance to be on x or y process suddenly got railroaded by the serious thermal density issues which began cropping up at the 90nm node for all fabricators, and the ensuing lag between 90nm and 65nm. Cell was probably the most affected between its original roadmap expectations and its launch form... and I'm not talking about four chips in one or gigaflops or anything, but simply the size and thermals of the launch chip.

This generation will definitely target and take into consideration process maturity at a certain slice in time, as well as future yield/cost improvement expectations going forward across the life of the console... it's just that this isn't anything out of the norm to begin with. I do think that the industry is back 'on track' though in being able to predict these things with some accuracy.
 
That's every generation for basically every product, including consoles. The problem with this most recent generation is that ICs planned years in advance to be on x or y process suddenly got railroaded by the serious thermal density issues which began cropping up at the 90nm node for all fabricators, and the ensuing lag between 90nm and 65nm. Cell was probably the most affected between its original roadmap expectations and its launch form... and I'm not talking about four chips in one or gigaflops or anything, but simply the size and thermals of the launch chip.

This generation will definitely target and take into consideration process maturity at a certain slice in time, as well as future yield/cost improvement expectations going forward across the life of the console... it's just that this isn't anything out of the norm to begin with. I do think that the industry is back 'on track' though in being able to predict these things with some accuracy.

This was kind of my point. When it comes to predicting "2013" or any other year, won't it really be dictated by the process maturity itself? Since I am not up on this as well as others here (this is a hobby for me, not a profession), and given the relative unpredictability, wouldn't any launch date be predicated on process maturity rather than a set date? Even to the point of 6 months of delay.

For the sake of argument, the next major shrink comes in 2012, with the bugs worked out by 2013. (Again, for the sake of argument.) If the next shrink after that is not going to hit until 2015 and mature by 2016, doesn't that seal it to a certain extent? I cannot see either Sony or MS waiting until the 2015 timeframe to launch a new console.

It seems that if the box is going to be around longer, be more expensive at launch, decrease in cost less quickly, etc., they will take this into account. Maybe they only plan on making around 5 million in the first 2 years or so. Since it has a longer life, and a longer life during which it is going to be the main draw, that would affect your strategy. You could afford to charge a higher price at launch, and build a console that is thus more expensive, if that is your long-term plan.

The only other option I can see is to go back to the old 5-year cycle. If you do that, the leap between generations may not be sufficient to differentiate the products to consumers. If the process shrinks are going to take longer to come along, then you are going to be forced to take the longer view. To me that equates to a launch price that is higher (and I mean higher than MS charged last time, but probably less than Sony) to cut down on potential losses from subsidizing the hardware, and it makes an excellent argument for creating a more powerful, not less powerful, console. In other words, no Wii route for MS and Sony. More likely they go the high-end route because they will be somewhat forced to.
 
I just read this summary from the AMD summit, and my idea of an even more compute-oriented GPU for the next box actually makes quite some sense :)
 
Assuming a Christmas 2013 launch, what would be the latest moment in time they could decide on final specs for major components like the CPU/GPU?


Taking the example of the G70/GeForce 7800 GTX: the GPU taped out in December 2004 and was released in June 2005. The RSX agreement (RSX being based on the 7900 GT and 7800 GTX) was signed between Sony and Nvidia in June 2003 ("working together" some 18 months before the public announcement on December 7, 2004... did Sony wait for the G70 to tape out?); RSX taped out in December 2005 and the final SDK was sent to developers in February 2006. So for the next generation I believe Sony and MS need final specs at least 18 months before release.

(Xbox 360 specs had been "leaked" on many forums/sites since September 2003.)
 