Predict: The Next Generation Console Tech

Seems to depend on the game a bit; the Mobility 6970 seems to be winning in some games while losing in others. It's rated at 75W

Yeah, the CPUs also seem to be different in some benchmarks (the 485M coupled with a faster Sandy Bridge), so I think it's fair to say they perform similarly, but the 6970M is rated at between 75 and 100 watts. So I think it is the better GPU in perf/watt.

The 6970M is a downclocked 6850, which makes it 1.7 billion transistors. The 485M, on the other hand, is a downclocked GTX 560 with 1.92 billion transistors. AMD has a big lead in perf/watt and perf/mm² right now. I am excited to see whether Nvidia can improve in these regards with Kepler, as promised in their GPU roadmap.
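
To put some rough numbers behind the perf/watt claim, here's a back-of-the-envelope sketch (Python) using only the TDP ranges and transistor counts quoted above; the assumption that both parts deliver roughly equal performance is mine, not a benchmark result:

```python
# Back-of-the-envelope perf/watt and perf/transistor comparison, assuming
# (as in the posts above) that the 6970M and the 485M land at roughly the
# same overall performance. TDP ranges and transistor counts are the
# figures quoted in the thread, not my measurements.

relative_perf = 1.0  # assumption: both parts perform about the same

gpus = {
    "Radeon HD 6970M": {"tdp_w": (75, 100), "transistors_bn": 1.7},
    "GeForce GTX 485M": {"tdp_w": (100, 100), "transistors_bn": 1.92},
}

for name, spec in gpus.items():
    lo, hi = spec["tdp_w"]
    print(f"{name}:")
    print(f"  perf/W           : {relative_perf / hi:.4f} .. {relative_perf / lo:.4f}")
    print(f"  perf/B-transistor: {relative_perf / spec['transistors_bn']:.2f}")
```

If the equal-performance assumption holds, the 6970M comes out ahead per watt at the low end of its rated range and ahead per transistor across the board, which is where the AMD lead in perf/watt and perf/transistor comes from.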

At 28nm there should be chips in the same TDP range with performance equivalent to the GTX 580. The 28nm high-k metal-gate process is said to have less leakage than 40G, so I hope power draw improves by more than the usual 30% with the next node.
 
With Kepler being 2011 tech and Maxwell being 2013 tech, wouldn't it make sense for Sony to use a custom stripped-down Maxwell part in PS4?

I think it all depends on the process node available at the time and the maximum TDP that Sony is aiming at for the PS4. 28nm is already late, and I don't know if 22/20nm will be available in mature form for a holiday 2013 launch. If they go for 200 watts total and a GPU-centric design, they could spend 120 watts on the GPU, 60 watts on the CPU, and 20 on the rest.

At 28nm this would be more or less GTX 580-level performance. That is an order of magnitude over RSX and what one would expect from a generational leap. They could launch a PS4 with this performance holiday 2012. Even in 2013 they could not launch a more powerful console, because it would still be on 28nm, I assume. I think we can expect 20nm Q1 2014 at the earliest from TSMC. :-/
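
As a rough sanity check on that 120-watt GPU budget (the 244 W figure is the stock 40nm desktop GTX 580 board power; the rest is just arithmetic, not a prediction):

```python
# Rough sanity check on the "GTX 580-class GPU inside a 200 W console" idea.
# 244 W is the stock desktop GTX 580 board power at 40 nm; everything else
# is just the budget split from the post above. Purely illustrative.

console_total_w = 200
gpu_budget_w, cpu_budget_w, rest_budget_w = 120, 60, 20
assert gpu_budget_w + cpu_budget_w + rest_budget_w == console_total_w

gtx580_40nm_w = 244
needed_reduction = 1 - gpu_budget_w / gtx580_40nm_w
print(f"Power reduction needed vs. a 40 nm GTX 580: {needed_reduction:.0%}")
# -> ~51%, i.e. noticeably more than the "usual" ~30% per node, which is why
#    the hope is that 28 nm HKMG (plus downclocking/binning) does better than average.
```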
 
Regarding specific GPU features, what are the odds that both Sony and MS will emphasize geometry throughput via tessellation and parallel setup processing, and to what degree?
Will this be an important key feature that developers hook into, or will raw pixel fill-rate dominate the GPU die budget, given the "1080p60" target and the stereoscopic 3D craziness?
 
Compute shading power is going to be more important as the years go on i.e. prettier pixels. It'll at least delay the need to drop frame buffer resolution. I mean, we haven't even begun to deal with shader aliasing or universal HDR lighting in games...
 
What they _are_ concerned with is cost. And cooling systems are expensive. The Wii's power envelope had less to do with marketing, and more to do with lower per-unit costs.

Well, these things go together nicely.
Nintendo has been very successful in bringing non-gamers/casuals into gaming, and I can guarantee that having a small, discreet and quiet console has been an advantage in gaining acceptance in those groups.
 
What they _are_ concerned with is cost. And cooling systems are expensive. The Wii's power envelope had less to do with marketing, and more to do with lower per-unit costs.
Hell, you don't have to tell me Nintendo's main concern is cost :D Everyone knows the ridiculous extremes Nintendo goes to in order to save money, like recycling previous-generation hardware. That's obvious. You are not getting my point.

What I'm trying to say is that low power consumption is not some sort of unbreakable rule for them. If they have a hardware throughput target to sustain their console strategy (in this case supposedly to guarantee better 3rd-party support), then going by the historical precedent set by the Wii's ultra-low TDP is to some extent a waste of time. The reason Nintendo made a big fuss about the Wii's low operational power and size was to somewhat draw attention away from its also ultra-low graphics throughput (relative to competing products).

Also helping their cooling solution this time will be a larger console size. That was a given to a lot of us even before the rumors started. This is the point I was trying to make in my previous post. Believe me, I'm not oblivious to Nintendo's compulsive obsession with profitability, even when it sometimes ends up coming back to bite them in the ass. ;)
Well, these things go together nicely.
Nintendo has been very successful in bringing non-gamers/casuals into gaming, and I can guarantee that having a small, discreet and quiet console has been an advantage in gaining acceptance in those groups.
What you say is valid. But there's a certain size threshold that's acceptable to users if they are interested in the product. To give an example, if a person is attracted to Kinect, they won't hold back because the 360 is bigger than the Wii. So to sum up, there are certain factors in console purchases that take precedence over size or power consumption.
 
Compute shading power is going to be more important as the years go on i.e. prettier pixels. It'll at least delay the need to drop frame buffer resolution. I mean, we haven't even begun to deal with shader aliasing or universal HDR lighting in games...

True enough, but I'd argue that as long as you still have glaring artifacts like polygon edge aliasing and even crudeness due to insufficient geometry information, maybe you should address that first. And that's what we're dealing with on current gen, not only the Wii (which is obviously dreadful in that respect). As has been pointed out, the galloping power draw of the top-end PC GPUs pretty much ensures that we have already seen what could be done in a console on 28nm. Having a resolution roof at 1080p helps, but I would also assume that 3D capability will be mandatory, which also requires at least a solid 60fps (120fps preferable, obviously) in order to work reasonably well. Compared to today, both resolution and frame rate requirements have increased, so just how much more processing per pixel are we likely to afford, when the number of pixels pushed will need to increase by a factor of five, roughly? 90nm => 28nm gives you just under a factor of ten in gate budget to play with.
Some restraint in expectations is probably in order.
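
For anyone who wants to check the arithmetic, here's the rough calculation I have in mind; the 720p30 baseline and the strict 1080p60 (plus stereo) targets are simplifying assumptions on my part:

```python
# Quick arithmetic behind the "factor of five more pixels vs. ~10x the gates"
# point. The 720p30 baseline and the 1080p60 (plus stereo) target are my own
# simplifying assumptions, purely for illustration.

current_pixels_per_s = 1280 * 720 * 30        # rough current-gen baseline
target_pixels_per_s = 1920 * 1080 * 60        # 1080p60 target
stereo_factor = 2                             # two views if 3D is mandatory

pixel_growth_2d = target_pixels_per_s / current_pixels_per_s
gate_growth = (90 / 28) ** 2                  # ideal area scaling, 90 nm -> 28 nm

print(f"Pixel throughput growth (2D): {pixel_growth_2d:.1f}x")
print(f"Pixel throughput growth (3D): {pixel_growth_2d * stereo_factor:.1f}x")
print(f"Gate budget growth (ideal):   {gate_growth:.1f}x")
print(f"Per-pixel budget growth (2D): {gate_growth / pixel_growth_2d:.1f}x")
print(f"Per-pixel budget growth (3D): {gate_growth / (pixel_growth_2d * stereo_factor):.1f}x")
```

With those assumptions the per-pixel budget only roughly doubles in 2D and barely moves at all once you render two views, which is the reason for the restraint.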
 
Mandatory 3D is about as silly and pointless as the minimum framebuffer resolution requirements...

True enough, but I'd argue that as long as you still have glaring artifacts like polygon edge aliasing and even crudeness due to insufficient geometry information, maybe you should address that first.

As you well know, post-process edge filtering would require more compute shading power too... I'd argue shadow map solutions need a much bigger boost for image quality than polygon counts at this stage when you're talking bang for buck - i.e. z-fill rates & compute shading power for filtering.

Raw poly throughput is one thing, but it's ultimately useless if there's insufficient shading power to do anything nice with them.
 
Mandatory 3D is about as silly and pointless as the minimum framebuffer resolution requirements...
I'm inclined to agree about 3D, but I cannot imagine that Sony wouldn't want to leverage the PS4 to push their 3D devices in other business groups. It's inconceivable.
I simply don't agree about resolution, but that's in the domain of personal preference. Primarily gaming at 2560x1440 resolution makes resolution limits pretty damn glaring.

As you well know, post-process edge filtering would require more compute shading power too... I'd argue shadow map solutions need a much bigger boost for image quality than polygon counts at this stage when you're talking bang for buck - i.e. z-fill rates & compute shading power for filtering.
Well, opinions differ on that one.
But my main point was that increase in resources is ballpark predictable, and it is a finite resource.
 
I'm inclined to agree about 3D, but I cannot imagine that Sony wouldn't want to leverage the PS4 to push their 3D devices in other business groups. It's inconceivable.
I simply don't agree about resolution, but that's in the domain of personal preference. Primarily gaming at 2560x1440 resolution makes resolution limits pretty damn glaring.

Eh? That's a completely different argument from forcing developers to go with a set resolution or feature. In no way am I arguing that higher resolution doesn't have benefits. Of course it does, but forcing developers to go with some resolution to hit a marketing bullet point is just ignorant and ridiculous. As long as they have a decent scaler, they can always keep the "resolution output support" just as they've always done this generation. The mass market is going to be ignorant of slight resolution deficits.

At some point devs will want to use more advanced shaders or fillrate-sucking effects, and that's obviously going to cost something on a fixed platform, so a mandatory rendering resolution serves no real point. It's been demonstrated across many games this generation and I see no reason why it should become enforced again. Of course, if you instead want to focus more on geometry processing hardware rather than pixel-related hardware, that's going to make resolution requirements that much more limiting for developers.
 
Longer pipeline and less transistors in order to achieve much higher clocks.
It doesn't necessarily mean it's a poorer performer.

Longer pipeline generally DOES mean poorer performance. All other things being equal. And Xenon is not only longer, but also much narrower than the Athlon64 (per core).

I might be mistaken but I think Xenon was even longer than the P4. Add to that narrower, in order and clocked lower than the higher end models and it doesn't look great. Certainly not a case of assuming it must be better because it's 3 cores vs 2.
 
I was actually gaming on a dual-core Opteron 170 @ 2.5 GHz until January of this year. All I can say is that if this is the case, no PC developer seems to know how to optimise for the Athlon X2.

Early on, in single-threaded stuff, sure, it was a lot faster. Stuff like Capcom's Framework engine, that had all the PC sites wet over its CPU scalability? No. Definitely no.

But you also have to consider the extra OS and API overhead that PC CPUs need to deal with. I think it's also the case with many games that functions dealt with on the GPU in the console are passed onto the CPU in the PC to ensure feature compatibility.

And finally, there's the fact that developers won't be targeting specific CPU architectures very much, especially older ones, so modern code is never going to be utilising an Athlon X2 nearly as well as it will be utilising Xenon in the 360.
 
Longer pipeline generally DOES mean poorer performance. All other things being equal. And Xenon is not only longer, but also much narrower than the Athlon64 (per core).

I might be mistaken but I think Xenon was even longer than the P4. Add to that narrower, in order and clocked lower than the higher end models and it doesn't look great. Certainly not a case of assuming it must be better because it's 3 cores vs 2.

I said it's not necessarily a poorer performer because it has higher clocks, even though it has a longer pipeline.

Just for the stupidly extreme example, I'm pretty sure a Northwood @ 3GHz will outperform a Sandy Bridge @ 300MHz, even though it has 1/10th of the transistors.
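
To make the (deliberately silly) example concrete, think of it as performance ≈ IPC × clock; the IPC figures below are invented purely for illustration:

```python
# Toy illustration of the clock-vs-IPC point: treat performance as IPC x clock.
# The IPC numbers here are invented for the sake of the example, not measured.

def relative_perf(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz

northwood = relative_perf(ipc=1.0, clock_ghz=3.0)     # assumed baseline IPC
sandy_bridge = relative_perf(ipc=3.0, clock_ghz=0.3)  # assume ~3x the IPC

print(f"Northwood @ 3 GHz:      {northwood:.1f}")
print(f"Sandy Bridge @ 300 MHz: {sandy_bridge:.1f}")
# Even a big IPC advantage can't bridge a 10x clock deficit, which is the
# whole (deliberately extreme) point of the comparison.
```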
 
Speaking of 28nm, 20nm, etc., I would expect both Microsoft and Sony would have to be conservative in their power / cost at launch due to the possibility that Moore's law simply won't be there to reduce the cost and power consumption later in the generation.

Wouldn't they?
 
Speaking of 28nm, 20nm, etc., I would expect both Microsoft and Sony would have to be conservative in their power / cost at launch due to the possibility that Moore's law simply won't be there to reduce the cost and power consumption later in the generation.

Wouldn't they?

By the time they ship anything they'll have a pretty good idea when the next shrink is coming.
 
I think it all depends on the process node available at the time and the maximum TDP that Sony is aiming at for the PS4. 28nm is already late, and I don't know if 22/20nm will be available in mature form for a holiday 2013 launch. If they go for 200 watts total and a GPU-centric design, they could spend 120 watts on the GPU, 60 watts on the CPU, and 20 on the rest.

At 28nm this would be more or less GTX 580-level performance. That is an order of magnitude over RSX and what one would expect from a generational leap. They could launch a PS4 with this performance holiday 2012. Even in 2013 they could not launch a more powerful console, because it would still be on 28nm, I assume. I think we can expect 20nm Q1 2014 at the earliest from TSMC. :-/

20nm will be ready in the second half of 2013 according to AMD's roadmap:

[Image: AMD_Fusion_28nm_20nm_14nm_6.jpg]
 
I said it's not necessarily a poorer performer because it has higher clocks, even though it has a longer pipeline.

Just for the stupidly extreme example, I'm pretty sure a Northwood @ 3GHz will outperform a Sandy Bridge @ 300MHz, even though it has 1/10th of the transistors.

It doesn't necessarily suggest it's slower, but it is a good indicator. Coupled, of course, with the narrow execution resources and in-order architecture, plus the by all accounts very weak branch prediction.

Xenon actually has a longer pipeline than Northwood (23 vs 20 stages), even though it only runs at the same speed as the fastest Northwoods, which were themselves far slower than similarly clocked A64s.

Clock speed can make up for any architectural deficiency if it's extreme enough, but there isn't a huge clock difference between the AX2s and Xenon at the time of the 360's launch, and P4s at that point actually had more clock speed than Xenon while still offering less performance than the AX2s. So in effect, for Xenon to be faster than the slower-clocked AX2, it would have to be faster clock for clock than the Pentium D. Or, to be comparable to today's tri-cores, it would have to be far, far faster per core than the Pentium D.
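
A quick sketch of that inference with placeholder numbers (none of these are benchmark results, just the relative relationships described above, on a per-core basis):

```python
# Sketch of the inference above with placeholder numbers (not benchmarks):
# treat per-core performance as per-clock throughput x clock.

pentium_d = {"clock_ghz": 3.4, "per_clock": 1.00}  # assumed baseline
athlon_x2 = {"clock_ghz": 2.4, "per_clock": 1.70}  # assumed: beats the P4/D despite lower clock
xenon_clock_ghz = 3.2

def perf(cpu):
    return cpu["clock_ghz"] * cpu["per_clock"]

# Per-clock throughput a Xenon core would need just to match an Athlon X2 core:
needed_per_clock = perf(athlon_x2) / xenon_clock_ghz
print(f"Athlon X2 core: {perf(athlon_x2):.2f}, Pentium D core: {perf(pentium_d):.2f}")
print(f"Xenon per-clock needed to match the X2: {needed_per_clock:.2f} "
      f"(vs. {pentium_d['per_clock']:.2f} for the Pentium D)")
# With these placeholder numbers a narrow, in-order Xenon core would need to be
# roughly 27% better per clock than the Pentium D just to match the Athlon X2.
```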
 
20nm will be ready in the second half of 2013 according to AMD's roadmap:

[Image: AMD_Fusion_28nm_20nm_14nm_6.jpg]

Well, I know 20nm is scheduled to be ready in late 2013, but according to such roadmaps we should have had 28nm chips in Q4 2010. Going by recent history, I am a little bit skeptical about them delivering on schedule. And even if everything goes according to plan, I don't think Sony could manage a fall launch in only 2 months.
I would expect a 20nm console in sufficient quantity in 2014 at the earliest. Anyway, if they launch their console on 20nm, I would expect them to sell it at a profit from the start, or at least break even, because after 14nm things will become uncertain.
 