Predict: The Next Generation Console Tech

I'd estimate 500M transistors at the most for the XB720 CPU. Roughly half the size of an i7.

That depends: memory is denser than logic, and a console chip is going to have less memory than Intel packs on its chips. So an i7 design with the same transistor count as a console CPU is going to have a smaller footprint, because the i7 is going to have more memory... and Intel does a lot of fine tuning versus automated layout.

Anyhow, a better metric would be area (mm^2) and TDP.

Anyhow, AlStrong, correct me if I am wrong, but was the Xbox 360 GPU about 160mm^2 at 90nm and 110mm^2 at 65nm? Daughter die 70mm^2.

If so, going by area budgets (TDP can be scaled by frequency), MS is looking at about 230mm^2 (Xenos) to just shy of 260mm^2 (RSX) for the GPU if the same footprint is aspired to.

Now putting that into the context of ATI GPUs over the last couple years:

http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

Code:
Model    mm^2    trans(M)    proc    GFLOPS

~2010
6970    389    2640    40nm    2703
6870    255    1700    40nm    2016
6770    170    1400    40nm    1360

~2009
5870    334    2154    40nm    2720
5770    170    1040    40nm    1360

~2009
4890    282    959    55nm    1360
4770    137    826    40nm    960

~2008
3870    192    666    55nm    496
3650    132    378    55nm    174

~2007
2900    420    700    80nm    475
2600    153    390    65nm    192

I know someone was trying to call it "enthusiast", but on 28nm in 2013 a 2x jump over the logic of a 6850 (a 6870 with disabled units) is *right in the ballpark* of the previous-gen console GPUs' footprints. Yes, there would be redundancy (both block-level and entire blocks disabled, as is *already the case* in console GPUs), so you are looking more at the 6850's functional units, and you would also be looking at reduced frequency due to binning.

But even with those considered by going from 40nm to 28nm and the mentioned redundancy and reduction in frequency a console GPU with ~ 250mm^2 is going to be a lot faster than the Radeon HD6870 is today.

But that is comparing a 2010 GPU with a 2013 console GPU--the console GPU should be faster as it has 3 years of additional process technology behind it.

I don't know if they will target the same footprint, but if they do, hitting about 250mm^2, nearly 3B transistors, and 3TFLOPs is not exactly "enthusiast" but is something they could squeak out with significantly reduced frequencies over the PC parts plus redundancy (a fictional PC part at that size is going to be nearly 3.5B transistors and well over 4TFLOPs, and something like a follow-up to the 389mm^2 6970 is going to weigh in at over 5B transistors and 5TFLOPs, roughly). Now that, especially in CrossFire, is enthusiast ;)
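For reference, the rough math behind those figures, assuming ideal (40/28)^2 density scaling from node to node (real shrinks typically land below the ideal), sketched in Python:

```python
# Back-of-the-envelope die scaling from TSMC 40nm to 28nm, assuming
# ideal (old/new)^2 density gain; real-world shrinks come in below this.

def scaled_transistors(trans_m, area_mm2, target_area_mm2,
                       old_node_nm=40.0, new_node_nm=28.0):
    """Estimate transistor count (in millions) after porting a design
    to a smaller node and rebudgeting it to a different die area."""
    density_gain = (old_node_nm / new_node_nm) ** 2
    return trans_m * density_gain * (target_area_mm2 / area_mm2)

# HD 6870: 1700M transistors on 255 mm^2 at 40nm, ported to a
# 250 mm^2 die at 28nm:
print(round(scaled_transistors(1700, 255, 250)))  # ~3400M, i.e. "nearly 3.5B"
```

Knock that down for redundancy and console binning and you land around the ~3B / 3TFLOPs console estimate above.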


As for all of those (also in Redmond?) who think the Xbox crowd is interested in the Xbox 3 being a Wii-like approach: what is the sales pitch? Why would I want this new console over just keeping my current box? What is going to get the enthusiasts excited about spending $300 on a Kinect-Box with games with marginally improved features and graphics?

Something has to sell the new thing. Graphics is one of the "obvious" selling points to get $300 out of someone's pocket. The Wii was novel with the Wiimote. Call me confused but I would rather keep my Xbox 360 than plunk down on a "services box" that doesn't offer enough under the hood to provide new experiences over the old box. There are a LOT of quality games I have never played on the Xbox so the new box needs to offer something compelling to invest in it.




Links with lots of numbers on sizes...

http://www.pcreview.co.uk/forums/jasper-xbox-360-150w-65nm-gpu-finally-arrive-stores-t3682974.html
http://www.tgdaily.com/hardware-fea...0-now-in-production-xbox-‘540’-coming-in-2009
http://forum.beyond3d.com/showthread.php?p=1600128
 
I expect MS to use 28nm shrink for performance and go 2xXenon with OoOE cores. That's 6 cores and 12 OoO threads with die size and power between a 65nm and 45nm Xenon.

If not for the ARM rumors I would have said this is the "logical" step, as it allows all their software to be BC. Getting a 28nm chip with 2x the cores/threads (6/12 vs. 3/6), better per-core performance with OoOE, a lot of clean-up of Xenon stalls/gotchas, more cache, and a smaller overall footprint than the 90nm Xenon would seem to be a "safe" processor. Then you shift your budgets over to the GPU side with the intention of getting developers to use the GPU more for stuff (GPGPU).

I think that would be the only competitive solution against Sony releasing a real console--if Sony aims for a similar CPU and GPU footprint and goes with a new Cell design they may end up with 32+ SPEs with a 28nm design (without breaking the bank). For real world, out of the box, performance MS's saving grace would have to be the GPU.

If they are looking for ARM processors to go up against that... thing... if it ever exists, with the hopes of getting early adopters excited, well, who is marketing this thing? I want some of the kool-aid!
 
Indeed.

I don't think anyone would argue otherwise.

However, we are talking about 32nm vs 28nm.

By the available published data (mostly from the past few IEDM), Intel 32nm has higher drive currents (so it's faster), lower leakage, and tighter logic than TSMC 28nm. TSMC wins only with smaller SRAM cells. TSMC 28nm really isn't a node up from, or even equivalent with Intel 32nm in any real way, and no-one in the know thinks it is.
 
I don't know if they will target the same footprint but if they do hit about 250mm^2 nearly 3B transistors and 3TFLOPs is not exactly "enthusiast" but is something they could squeak out ....

...Getting a 28nm chip with 2x the cores/threads (6/12 vs. 3/6), better per-core performance with OoOE, a lot of clean-up of Xenon stalls/gotchas, more cache, and a smaller overall footprint than the 90nm Xenon would seem to be a "safe" processor. Then you shift your budgets over to the GPU side with the intention of getting developers to use the GPU more for stuff (GPGPU).



...I think that would be the only competitive solution against Sony releasing a real console...

Indeed!

That's right about in line with where I was projecting, based on 28nm:

3 core xcpu (165m) => 9 core xcpu (495m) - or an upgraded 6 core PPE with OoOe and larger cache along with an ARM core (13m trans)

This leaves a hefty 2.8b trans available for xgpu which could accommodate 3x AMD ~6770 (1040m) ~3 teraflops or 4x AMD ~6670 (716m) ~2.8 teraflops.
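As a quick sanity check on that split (just the arithmetic; the ~3.3B total is what's implied by the 495M CPU plus 2.8B GPU figures above):

```python
# Transistor budget check, figures in millions, per the estimates above.
total = 3300                 # implied overall budget (~3.3B)
xcpu = 3 * 165               # 9-core xcpu = 3x the 165M tri-core Xenon
arm = 13                     # the ARM core mentioned above
xgpu = total - xcpu - arm
print(xgpu)                  # 2792 -> the "hefty 2.8b" left for the GPU
```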


I agree also on the notion of MS needing to go for the gusto hardware-wise.

With the advent of OnLive streaming gaming becoming viable at some point in the future, I'm sure most at Sony & MS realize the days of the fixed console are numbered. This MAY be the last generation of consoles.

Getting it right for xb720 is imperative to having success in the longterm.
 
A good discussion would be VLIW vs GCN.

As I've said before, I don't believe there'll be any significant difference between Sony's or MS's gpu.

I definitely believe Xbox3 will use GCN and I believe that neither PS4 nor Xbox3 will exceed 1300 ALUs (probably not even 1200) with an AMD GPU or 400 with Nvidia if Sony goes this route.
 
Yeah, the 50W is too high after thinking about it, but I don't see 30W as being unreasonable for "other". The high-clocked GDDR5 itself will be in the 10-15W range.
How much power does Kinect use? I thought it had some kind of extra processing done outside the xbox itself but I could have misunderstood it.
 
Adding board power figures...
Code:
Model  mm^2    trans(M)   proc   GFLOPS   Power

~2010
6970    389    2640    40nm    2703    250W
6870    255    1700    40nm    2016    150W
6770    170    1400    40nm    1360    110W

~2009
5870    334    2154    40nm    2720    190W
5770    170    1040    40nm    1360    110W

~2009
4890    282     959    55nm    1360    190W
4770    137     826    40nm     960     90W

~2008
3870    192     666    55nm     496    110W
3650    132     378    55nm     174     75W

~2007
2900    420     700    80nm     475    215W
2600    153     390    65nm     192     50W
Not a totally clear picture, because the architectures themselves vary wildly in efficiency, but very generally, acceptable power is trending upwards in PC performance GPUs, because dedicated PC gamers are the kind of folk that really don't care anymore.

The point you made about lowering clocks to curb power and maximize yields is interesting. That will have to compete against a smaller chip with higher clocks, which may or may not be cheaper. I'm not saying either wins. Both are valid ideas and might come out cheaper/less risky overall, case by case.

You could potentially launch big and slow, and after a process shrink or two, start dropping in a halved chip that runs at twice the clocks. Just like you can replace memory chips with 2x faster ones and reduce bus width, while maintaining compatibility. So much for theory.
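The theory part is just that peak throughput scales with unit count times clock, so halving one while doubling the other nets out (unit counts and clocks below are hypothetical, purely for illustration):

```python
# "Half the units at twice the clock" equivalence, assuming peak
# throughput = ALUs x clock x FLOPs/ALU/cycle. Figures are made up.

def peak_gflops(alus, clock_ghz, flops_per_alu=2):
    return alus * clock_ghz * flops_per_alu

launch = peak_gflops(1600, 0.5)   # big, slow launch chip
shrunk = peak_gflops(800, 1.0)    # halved chip after shrinks, at 2x clock
print(launch, shrunk, launch == shrunk)  # 1600.0 1600.0 True
```

Of course power, yields, and memory bandwidth don't follow that clean equivalence, which is why it stays theory.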
 
As an aside, I remember when Xenos was first released was it not mentioned that the GPU itself (not memory) was 25W?

I remember a figure of either 25 or 30W, but I don't know whether that included the daughter die. And I can't find a reference to that anywhere now, so maybe I've spent the last few years with false memory. :???:
 
How much power does Kinect use? I thought it had some kind of extra processing done outside the xbox itself but I could have misunderstood it.

AFAIK it was supposed to have some awesome DSP, but it didn't make it into the final device and the 360 is doing most of the work. I think its load depends on what you do and how precise you want it to be, and goes up to about 1 HW thread (so roughly 1/6 of the 360's CPU power).
 
Regarding the wattage remember that gcn will be a new architecture built around the "more performance per watt" idea too
 
AFAIK it was supposed to have some awesome DSP, but it didn't make it into the final device and the 360 is doing most of the work. I think its load depends on what you do and how precise you want it to be, and goes up to about 1 HW thread (so roughly 1/6 of the 360's CPU power).

Didn't it turn out that Kinect had everything in it that it was 'supposed' to have? There was lots of talk of downgrades when the resolution was described as 320 x 240, but it turned out the sensors were still 640 x 480 as planned; it was just that Kinect had to scale the images down to fit the data into a slice of the 360's USB bandwidth.
 
Kinect doesn't include the PrimeSense processor, but does include a number of other processors whose function has never been described (motion-tracking the user might be one of them). Kinect uses some CPU and GPU. Remind yourselves here. ;)
 
Didn't it turn out that Kinect had everything in it that it was 'supposed' to have? There was lots of talk of downgrades when the resolution was described as 320 x 240, but it turned out the sensors were still 640 x 480 as planned; it was just that Kinect had to scale the images down to fit the data into a slice of the 360's USB bandwidth.

What Shifty said - it has different HW than initially reported and is (heavily) aided by the console CPU. Load depends. This is evident from the fact that Microsoft has stated on various occasions that developers can add to the sensor's capabilities above what's in the SDK (e.g. two-person tracking can be extended to 3, 4, or whatever). That has to run on something. :)
 
From the outset I'd always thought that the bulk of the work would be done on the 360. The idea of running all the image processing, image recognition and skeleton mapping stuff on the Kinect device when vastly more power would always be in the 360 wouldn't make sense.

I'm not sure what PrimeSense processor the Kinect device was intended to have in it, but Kinect as it is does have at least one PrimeSense processor in it:

http://www.eetimes.com/electronics-news/4210649/Kinect-s-BOM-roughly--56--teardown-finds-

And Google even found this handy part photo!

https://chipworks.secure.force.com/...avigationStr=CatalogSearchInc&searchText=1080

As it is the device needs its own PSU (or a custom connector on the slims) and it needs a fan to keep it cool. What exactly did MS take out of the device between E3 and release, and how big was it supposed to be?
 
Bordering on the two threads, there's the rumor that Kinect 2 will be such high resolution that it will be able to understand facial expressions, read lips, and track eyes, on top of, of course, more of the same.
 
Intel 32nm is likely just as dense if not denser than TSMC 28nm. And Intel 32nm will certainly achieve better power efficiency figures than TSMC 28nm. Just like Intel 45nm vs TSMC 40nm.

By the available published data (mostly from the past few IEDM), Intel 32nm has higher drive currents (so it's faster), lower leakage, and tighter logic than TSMC 28nm. TSMC wins only with smaller SRAM cells. TSMC 28nm really isn't a node up from, or even equivalent with Intel 32nm in any real way, and no-one in the know thinks it is.

Thank you for confirming that I'm not crazy. :smile:

How likely is it that Intel would ever agree to manufacture console chips? I assume they make way more money on their own high margins stuff, but at some point they will be so far ahead that it may be worth it for MS/Sony to pay the Intel premium instead of using a vastly inferior TSMC or GloFo process, for the savings they could have on size and power consumption. Maybe? I mean Intel seems to be further along with its highly advanced 22nm process than TSMC is with 28nm.

Has Intel ever agreed to produce anything for anybody else?
 
Thank you for confirming that I'm not crazy. :smile:

How likely is it that Intel would ever agree to manufacture console chips? I assume they make way more money on their own high margins stuff, but at some point they will be so far ahead that it may be worth it for MS/Sony to pay the Intel premium instead of using a vastly inferior TSMC or GloFo process, for the savings they could have on size and power consumption. Maybe? I mean Intel seems to be further along with its highly advanced 22nm process than TSMC is with 28nm.

Has Intel ever agreed to produce anything for anybody else?

TSMC doesn't produce a cpu for a console or anything else besides mobile ARMs, and you'd need to use bulk. They'll be a non-player cpu-wise nextgen too.

Of course Intel would make a console cpu, you just need to pony up a mountain of cash..... and yes they made the xbox1 cpu.

As to their 22nm process, let's see them come out with actual product before we hand them awards on it.
 
TSMC doesn't produce a cpu for a console or anything else besides mobile ARMs, and you'd need to use bulk. They'll be a non-player cpu-wise nextgen too.

Of course Intel would make a console cpu, you just need to pony up a mountain of cash..... and yes they made the xbox1 cpu.

Right, I'd forgotten about that.

I think it's pretty certain that Intel's 22nm is a fair bit better than their already industry leading 32nm process.
 