Predict: The Next Generation Console Tech

The general conversation was about CPUs. That was AMD's almost complete recent CPU history. AMD's fab is now GlobalFoundries, a foundry that TheChefO has been pimping heavily. Those examples are the best and most appropriate ones available IMO. If you have any better or more appropriate examples then hey, I'd love to see them.

To round out AMD's entire recent lineup of CPUs (just to make absolutely sure that under no circumstances can I be accused of cherry-picking):

- Athlon II X4 (45nm) to Llano (32nm) showed a small improvement in perf/watt, no real increase in performance, and massive, massive yield issues that remain unresolved over half a year later (after an initial delay of over half a year).

- Phenom II X6 (45nm) to Bulldozer (32nm). OH. DEAR. :(

The jumps in performance across node transitions have been smaller than the increases that occurred during the lifetime of the node.

Oh yeah, WiiU CPU: 45nm. There's got to be a reason for this!

IBM good sir.
 
I'm sure you are aware of how mGPU setups work currently. They need twice the VRAM of a single-GPU setup.
Plus tons of algorithms will be rather complex/inefficient, if not impossible, to implement.
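
To put rough numbers on the VRAM point, a minimal Python sketch (the 2GB card size is just an example, not any real SKU):

Code:
# Usable VRAM under alternate-frame rendering (AFR): each GPU renders whole
# frames, so textures, buffers and render targets are mirrored on every card.
num_gpus = 2
vram_per_gpu_gb = 2               # illustrative card size (assumption)
total_physical_gb = num_gpus * vram_per_gpu_gb
usable_gb = vram_per_gpu_gb       # assets duplicated, not pooled
print(f"physical: {total_physical_gb} GB, usable working set: {usable_gb} GB")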
Sure PCs will perform better than next-gen consoles.

But those PCs won't cost $300 or $400. Probably not even $800.
Would a 6870 (or 560 Ti 448) + 2500K be about on par with the supposed next-gen consoles? You can get that kind of power for less than $800 today, and it'll be cheaper in a year.
 
Just a little point about Acert's assessment on the previous page. Barts actually is a VLIW4 part, not VLIW5. Not that it makes a lot of difference, though.
 
Expensive. Or just unnecessary for their purposes. It's not like Waternoose was particularly large on 45nm, a tried-and-true process that's been out for a long while now. This emphasis on companies pushing the latest and greatest process node is getting tired.

Power7 was also already designed for 45nm. They'd have to start from (near) scratch on 32nm if they want to do it right. Shit scaling abounds otherwise. 32nm is also done with double patterning... extra steps, longer production. What's the big problem with using 45nm?
 
Keep in mind this is enthusiast grade; a mid-range part with far lower power consumption offers probably 15-25% lower performance, certainly not a mind-boggling drop.

Midrange PC hardware (e.g. a GTX460 + i3-2100) is much more than 15-25% slower than high-end PC hardware (e.g. a GTX580 + i7-2600). Many games will not reflect this because they are built with very low-spec hardware in mind (consoles).
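
For a paper-spec illustration of that gap, using the published core counts and shader clocks (peak throughput only; real-game deltas vary by workload):

Code:
# Peak SP GFLOPS = cores * 2 ops (mul+add) * shader clock (GHz).
cards = {
    "GTX 460": (336, 1.350),   # CUDA cores, shader clock
    "GTX 580": (512, 1.544),
}
gflops = {name: cores * 2 * clk for name, (cores, clk) in cards.items()}
gap = 1 - gflops["GTX 460"] / gflops["GTX 580"]
print(gflops)
print(f"GTX 460 is ~{gap:.0%} below a GTX 580 on paper")  # ~43%, not 15-25%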

Plus tons of algorithms will be rather complex/inefficient, if not impossible, to implement.
Would a 6870 (or 560 Ti 448) + 2500K be about on par with the supposed next-gen consoles? You can get that kind of power for less than $800 today, and it'll be cheaper in a year.

6870 would be a decent guess for the power of a next-gen console GPU. Perhaps a bit on the low side but I'm probably being overly optimistic here.

But no next-gen console CPU can match the i5-2500k. It is already too big on a process that is superior to TSMC 28nm, plus it's an Intel chip and Intel won't be in the next gen consoles.

Just a little point about Acert's assessment on the previous page. Barts actually is a VLIW4 part, not VLIW5. Not that it makes a lot of difference, though.

Erroneous! Rage against misinformation!
 
This emphasis on companies pushing the latest and greatest process node is getting tired.

If by this you mean, "the notion that nextgen consoles will be pushing the latest and greatest process node is getting tired", to that I would counter, since when haven't they?

02/2001 geforce3 150nm (first commercially available 150nm nvidia gpu)
11/2001 xbox1 150nm

10/2005 x1000 series 90nm (first commercially available 90nm ATI gpu)
11/2005 xb360 90nm


As we can see, using the latest process node isn't a novel new idea. It would be a first if they didn't.
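
Putting those gaps in months, a quick Python check using the dates above:

Code:
# Months between the first commercial GPU on a node and the console launch.
from datetime import date

pairs = [
    ("150nm", date(2001, 2, 1), date(2001, 11, 1)),   # GeForce3 -> xbox1
    ("90nm",  date(2005, 10, 1), date(2005, 11, 1)),  # X1000 series -> xb360
]
for node, gpu, console in pairs:
    months = (console.year - gpu.year) * 12 + (console.month - gpu.month)
    print(f"{node}: console followed the first GPU by {months} month(s)")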



Further, can they afford not to? There are plenty of other entertainment devices vying to steal time/money/attention away from games boxes. The only real advantages over i-devices are that consoles don't need to be portable and so they can put more in the box to enable a richer entertainment experience.

If they go weak in that regard, it won't be long until ipad-x is matching (or beating) the capabilities of the "economic choice" MS/Sony made by not using the technology available to them to leverage this rich gaming advantage...



So then continuing the trend above ...

03/2012 AMD HD7950 28nm (first commercially available 28nm AMD gpu)
11/2012 xb720 28nm
 
IBM good sir.

I think there may be a bit more to it. But the choice of process strongly implies that Nintendo is not taking any chances in terms of time-to-market hiccups due to process yield problems. It also implies that they are not pushing any envelopes in terms of die size or high power draws.

Then again, if they are not doing big, power hungry dies, they also haven't got all that much to gain by targeting an unproven process. They may as well enjoy the safety and moderate cost of 45nm.

Personally, I'd simply use an updated Xenon design (more on-die memory, faster communication to GPU memory management) and be done with it. Decent performance, simplest possible porting, low power and smallish die size at 45nm, probably below 100mm2 unless they go berserk with on-die memory. Performance would be a significant but not game changing step up from the 360. They would have room to spend more money and effort on GPU and of course controller.
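
Rough sanity check on that die-size guess in Python. Ideal area scaling is optimistic (real shrinks land well above it), and the ~170mm2 figure for 90nm Xenon is from memory, so treat both as assumptions:

Code:
xenon_90nm_mm2 = 170               # approximate 90nm Xenon die size (assumption)
ideal_scale = (45 / 90) ** 2       # ideal area scaling, 90nm -> 45nm
shrunk_mm2 = xenon_90nm_mm2 * ideal_scale
print(f"ideal 45nm Xenon: ~{shrunk_mm2:.0f} mm^2")
# ~42mm^2 ideal; even at half that scaling efficiency, plus extra on-die
# memory, staying under 100mm^2 looks comfortable.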
 
Expensive. Or just unnecessary for their purposes. It's not like Waternoose was particularly large on 45nm, a tried-and-true process that's been out for a long while now. This emphasis on companies pushing the latest and greatest process node is getting tired.

Power7 was also already designed for 45nm. They'd have to start from (near) scratch on 32nm if they want to do it right. Shit scaling abounds otherwise. 32nm is also done with double patterning... extra steps, longer production. What's the big problem with using 45nm?

Oh I don't have a problem with them using 45nm. I don't have a problem with Sony and MS not wanting to jump in next year on 28nm either.

A really solid design on a well-established node makes perfect sense to me, more so than a very conservative design on a troubled new node. And even the latter makes a lot more sense than trying a "Bulldozer + GTX 480" kind of approach.
 
Past performance DOES NOT predict the future, at least not when there is a mountain of other variables that all have huge impact on the final decision.
http://xkcd.com/605/

The mountain of variables has been addressed by me ad nauseam.



I thought the point I made in regard to "pushing the latest and greatest process node" was pretty clear and valid. Now unless you have something which negates the point I laid out (i.e. AMD/Nvidia NOT being able to launch early next year with 28nm GPUs), I'm not sure what your point is.

By the way, how do you say 12 months?
 
If by this you mean, "the notion that nextgen consoles will be pushing the latest and greatest process node is getting tired", to that I would counter, since when haven't they?

02/2001 geforce3 150nm (first commercially available 150nm nvidia gpu)
11/2001 xbox1 150nm

10/2005 x1000 series 90nm (first commercially available 90nm ATI gpu)
11/2005 xb360 90nm


As we can see, using the latest process node isn't a novel new idea. It would be a first if they didn't.



Further, can they afford not to? There are plenty of other entertainment devices vying to steal time/money/attention away from games boxes. The only real advantages over i-devices are that consoles don't need to be portable and so they can put more in the box to enable a richer entertainment experience.

If they go weak in that regard, it won't be long until ipad-x is matching (or beating) the capabilities of the "economic choice" MS/Sony made by not using the technology available to them to leverage this rich gaming advantage...



So then continuing the trend above ...

03/2012 AMD HD7950 28nm (first commercially available 28nm AMD gpu)
11/2012 xb720 28nm

11/1998 Dreamcast 250nm
03/2000 PS2 250nm

Edit: GPU manufacturing issues caused DC launch shortages (Sega estimated ~300,000 lost launch period sales). Early systems had beefed up cooling (heatpipe, radiator) that was later removed.
 
6870 would be a decent guess for the power of a next-gen console GPU. Perhaps a bit on the low side but I'm probably being overly optimistic here.

Kind of have to wonder if they'll do the same thing as last gen with respect to the number of ROPs. Eight was nothing fantastic in 2005/2006. 16 might be enough for 1080p assuming clocks aren't worse. There are still bandwidth concerns to contend with.
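
Quick fill-rate math, with the clock purely assumed:

Code:
# Can 16 ROPs feed 1080p60? Peak fill = ROPs * clock (1 pixel/ROP/clock).
rops = 16
core_clock_ghz = 0.7                        # assumed clock, not a leaked spec
peak_gpix = rops * core_clock_ghz           # 11.2 Gpixels/s
pixels_per_sec = 1920 * 1080 * 60           # ~124 Mpixels/s at 1080p60
overdraw_budget = peak_gpix * 1e9 / pixels_per_sec
print(f"~{overdraw_budget:.0f}x overdraw budget at 1080p60")
# Looks roomy on paper, but alpha blending, shadow/extra passes and MSAA
# eat into it fast, and bandwidth is the real ceiling anyway.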

Devs will always "optimize" with lower-res alpha anyway, and they can do things to hide the edge artefacting.

Or heaven forbid, drop to 720p with MSAA and upscale with a whatever-generation AVIVO. :p FWIW, I doubt nearly as many people will notice sub-1080p next gen as noticed sub-720p this gen. Scaling has gotten a lot better since 2005.

Wonder what advancements they'll make to texture filtering too... Texture interpolators were moved to the shaders in Cypress. Could they move filtering to the shaders too, or would it be too slow? I mean, these days we need to deal with shader aliasing (normal-specular/parallax/displacement...) instead of just plain textures anyway.
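
For reference, moving filtering to the shaders means paying for this per lookup: four fetches plus the lerp math that dedicated filter hardware gives you for free. A one-channel Python sketch of the arithmetic:

Code:
import numpy as np

def bilinear(tex, u, v):
    # tex: 2D array of texels; (u, v): texel-space coordinates
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    t00, t10 = tex[y0, x0],     tex[y0, x0 + 1]   # four point fetches
    t01, t11 = tex[y0 + 1, x0], tex[y0 + 1, x0 + 1]
    top = t00 * (1 - fx) + t10 * fx               # three lerps in ALU
    bot = t01 * (1 - fx) + t11 * fx
    return top * (1 - fy) + bot * fy

tex = np.arange(16.0).reshape(4, 4)
print(bilinear(tex, 1.5, 1.5))   # midpoint of 4 texels -> 7.5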

----

Anyways, what I'm getting at is that they could then shove a lot more ALUs in there with their fancy DX11+ memory caches. Shading is going to be more limiting in the long run, so I'm just trying to think of what sort of alterations they can do within the current die layouts.

*shrug*
 
11/1998 Dreamcast 250nm
03/2000 PS2 250nm

Edit: GPU manufacturing issues caused DC launch shortages (Sega estimated ~300,000 lost launch period sales). Early systems had beefed up cooling (heatpipe, radiator) that was long gone by the time of the Western launch.

ok ... have you seen the launch numbers for xb360 and ps3?

Neither sold millions in the first calendar year. Shortages are expected.

As for ps2 being on an ancient process node at launch, that helps explain their under-performance vs XB and GC.
 
ok ... have you seen the launch numbers for xb360 and ps3?

Neither sold millions in the first calendar year. Shortages are expected.

As for ps2 being on an ancient process node at launch, that helps explain their under-performance vs XB and GC.

"Shortages" isn't a binary factor. You can be a tiny bit short, or quite short, or supply starved. If you have several million systems available in your first year and you can sell them that is a good thing.

PS2 decimated the Dreamcast (that beat it to 250nm by over a year) and decimated the Xbox (that launched on a relatively new process). Having the right product at the right time worked out pretty well for Sony.

Wii wasn't first on a new node either.
 
"Shortages" isn't a binary factor. You can be a tiny bit short, or quite short, or supply starved. If you have several million systems available in your first year and you can sell them that is a good thing.

I'm not talking about the first full year; I'm talking about at and around launch (consoles typically launching at the end of the year for holiday sales).

In the US (I don't have reliable numbers for other regions):
PS3, launch to end of year: 688k
XB360, launch to end of year: 607k

I wouldn't expect much more than this for a console launched in 2012. More would be great, but I'm certainly not expecting it, and I wouldn't encourage MS/Sony to wait a year stockpiling consoles just to have a few million ready at launch so as to avoid shortages...

PS2 decimated the Dreamcast (that beat it to 250nm by over a year) and decimated the Xbox (that launched on a relatively new process). Having the right product at the right time worked out pretty well for Sony.

Indeed.

Right product at the right time, with the right marketing, the right brand, the right support, the right price, the right games, the right dimensions, the right aesthetic, and the right interface.

PS2's victory over DC hardly supports the notion that waiting and launching on an old process node is what won the market for PS2...

Wii wasn't first on a new node either.

seriously ... ?
 
Indeed! Nintendo and IBM have history. But IBM has designed the CPUs for MS and Sony too, and those have been fabbed somewhere other than IBM. Perhaps it's the on-die eDRAM? But in that case, why not IBM's 32nm process?

Googling for IBM I found this about Global Foundries from last year:

http://www.xbitlabs.com/news/other/...lems_with_IBM_s_32nm_Fabrication_Process.html

Where is IBM's 32 nm stuff?

The Cell is manufactured by IBM in Fishkill.

Power7+ on 32nm is supposed to be out in Q1 2012. I don't believe they're in any hurry with it though; they skipped Power6+ altogether. They're better off concentrating on getting Power8 at 22nm into production.

For whatever reason, GloFo is a bit of a mess right now (or maybe it's a lot of CYA spin by AMD :shrug:). They need to get their act together though, or someone like Samsung will swoop in and snatch up those contracts.

Sony or MS could certainly go 32nm with their CPU and garner most of the benefits they'd get at 28nm, but it would leave them with an undesirable shrink to 28nm.
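
Ideal-scaling math makes the point; real-world savings would be smaller:

Code:
# Ideal die-area savings between nodes: 1 - (new/old)^2.
for old, new in [(45, 32), (32, 28), (45, 28)]:
    saving = 1 - (new / old) ** 2
    print(f"{old}nm -> {new}nm: ~{saving:.0%} area saved (ideal)")
# 45->32 saves ~49%, but the follow-up 32->28 shrink only buys ~23%.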
 
Midrange PC hardware (e.g. a GTX460 + i3-2100) is much more than 15-25% slower than high end PC hardware (e.g. a GTX580 and i7-2600). Many games will not reflect this because they are built with very low spec hardware in mind (consoles).
I thought midrange includes the Nvidia 560 Ti, whose performance is not that far off? And I've heard AMD/ATI tends to have lower power consumption in general.

But no next-gen console CPU can match the i5-2500k. It is already too big on a process that is superior to TSMC 28nm, plus it's an Intel chip and Intel won't be in the next gen consoles.

I'm not entirely sure about that; it would depend on use. A 16-18 core Cell would definitely pose a challenge at certain tasks. And elements of the Cell can go up to 6GHz at 45nm, so a next-gen Cell turbo-clocking a few elements for single-thread gains could probably be viable. Would an i5 core outpace a 6GHz SPU?
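
Paper math only, and the 6GHz SPU is the hypothetical above, but the per-core peaks are closer than you might expect:

Code:
# Peak single-precision FLOPS per core.
# SPU: 4-wide FMA per cycle = 8 flops/cycle.
# Sandy Bridge core: 8-wide AVX add + 8-wide AVX mul per cycle = 16 flops/cycle.
spu_gflops = 8 * 6.0      # hypothetical 6GHz SPU
sb_gflops = 16 * 3.3      # i5-2500K core at 3.3GHz
print(f"6GHz SPU: {spu_gflops:.0f} GFLOPS peak, SB core: {sb_gflops:.1f} GFLOPS peak")
# Peaks are in the same ballpark; real throughput hinges on how well the
# workload fits the SPU's local store vs. the i5's caches and OoO core.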

Oh I don't have a problem with them using 45nm. I don't have a problem with Sony and MS not wanting to jump in next year on 28nm either.

When is competing 3D transistor tech going to be implemented outside Intel, at 28nm or the node after? And does FinFET offer similar benefits to Intel's 3D transistors?
If it does, using it would provide a boost.

It would also be interesting to know whether IBM is anywhere close to bringing optical interconnect technology to market, and whether the process node would affect its viability.
As for ps2 being on an ancient process node at launch, that helps explain their under-performance vs XB and GC.
Weren't they using their own fabs back then? This time around they're likely to use manufacturers with state-of-the-art processes.
 