Predict: The Next Generation Console Tech

But that is not what happened at full node shrinks ...
Half-node labels are mostly marketing these days, with less meaning than they once had. A half node at or below 45nm isn't an optical shrink of the full node numerically above it: either there is no full node above it at a given manufacturer, or the two are more distantly related than they once were.

"Dumb" optical shrinks at larger geometries were much more feasible, but that isn't realistically expected anymore.

At one point, the cost benefit goes away and even cell phone makers will stop filling their latest node production lines with product.
That eventuality has already been raised by pretty much everyone, including the foundries. 28nm's lifespan is expected to be unusually long by historical standards because of it.

TSMC knows this and will adjust accordingly. 450mm wafers are part of this solution.
450mm may be, or will be, but currently is not. At the rate other process costs and expenditures are rising with each node--excluding the things that get more expensive with 450mm--it's more about keeping things closer to the current normal that many are already unhappy about.

It's also not a solution if one possible endgame comes about: a process node far below 20nm with 450mm wafers would only need a tiny number of fabs to service a future market with current growth rates.
It's not a problem yet, or not for all parts of the IC industry, but if it comes to pass, 450mm's savings would be offset by underutilization charges creeping into the overall contract prices, or by poorly scaling manufacturing volumes leading to a repeat of 28nm's shortage.

As to the point on higher-MSRP items being more flexible in absorbing the higher cost of new nodes... AMD is currently selling a card based on a 28nm 123mm2 chip for $100.
This chip size is roughly the same as a projected xb360/ps3 SoC @ 28nm.
Both consoles retail for more than double this cost.
This is comparing a fully functional *projected* single-bin console chip against an existing standalone component that is part of a lineup that spans multiple price points.
There's a bunch of factors we'd have to control for before we can take away any lessons from that.
 
I don't know how Sony would offer a 40+ GB game (Uncharted 3) for download... :rolleyes:

I downloaded 30GB of game data yesterday. What's the big deal? People in places with third-world connections will still buy physical, but nearing the end of the next console cycle, most people living in dense cities will have better bandwidth and less latency from the nearest CDN server than they can get from the optical drive.
 
Maybe before, with Kutaragi at the helm, more importance was placed on shrinking chips than under Stringer ...
Unless they kicked out all the engineers with him, they should have the ability to determine cost savings at a given node and identify where there's a benefit. Accordingly, there wasn't this gen.

Full nodes cut the cost in half (in theory).
But aren't we seeing less than those sorts of savings?

If there were no gains to be had, I'm sure AMD/ATI/Nvidia would still be on 45nm ... (or mirroring Sony/MS and only release on full node drops)

But that isn't the case.
GPUs are monsters, with as many transistors as can be crammed onto them. Larger fabrication nodes mean bigger, hotter chips - they just can't be made at larger nodes. So instead they are made at smaller nodes and at great cost, alleviated somewhat thanks to binning across performance levels.

The benefits of savings on a full node offset the engineering/redesign costs which I'm sure were/are more expensive than in the ps2 days due to the complexity of the chipset.
Yes, but not by as large a margin as they used to, as I understand it (I can't find a straightforward article or report on diminishing returns, but that's the only description I hear applied to chip manufacture). So where a node shrink once meant your processors cost half as much, and you could design $200 chips at launch that'd be $100 in two years and $50 in four, next gen is looking more like a drop to maybe 75% of the cost each node, from $200 down to $150 down to $112.50. And that's with TSMC saying 14nm might not even happen. A $400 console can't get its BOM below $300 through die shrinks within the normal lifespan of a console. This gen it'll happen only because they've gotten so long in the tooth.
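To make the compounding concrete, here's a rough sketch (Python; the 50% and 75% per-node step factors are just the assumptions from the paragraph above, not real foundry figures):

```python
# Hypothetical: compound the per-node cost step over successive shrinks.
def chip_cost_over_nodes(start_cost, step_factor, shrinks):
    """Return the chip cost after each of `shrinks` node transitions."""
    costs = [start_cost]
    for _ in range(shrinks):
        costs.append(round(costs[-1] * step_factor, 2))
    return costs

print(chip_cost_over_nodes(200, 0.50, 2))  # ideal halving:      [200, 100.0, 50.0]
print(chip_cost_over_nodes(200, 0.75, 2))  # diminished returns: [200, 150.0, 112.5]
```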

Indeed. I'd be surprised if they didn't balk at a 25% increase in wafer costs for 28nm. The real kicker is what TSMC were looking to charge for 20nm and below.

However what TSMC would like to charge and what they will settle for are two very different things.
Presumably they are in a position to charge a price premium because people can't get their chips made elsewhere. The costs are rising not because some company is just pushing them up to make more money, but because the difficulties (and associated costs) of manufacturing are increasing, driving the price up - helped along by a lack of competition, since rivals can't manufacture at these harder nodes.

If the chip is cheap enough already, it may not be beneficial to invest in a redesign that the savings don't offset.
Exactly. If you run the risk of not being able to change the expensive chip to a cheap one through process shrinks, start with a cheaper chip in the first place.
 
From IGN:

To further signal the winding down of the current console generation, approximately 60% of respondents have no plans to release games for the Xbox 360, PlayStation 3, or Nintendo Wii after 2013. Of course, this means some 40% intend to keep at current-gen releases after next year. To that point, an anonymous developer told IGN, “I would not be surprised if something atypical cannibalizes the market, maybe even the Xbox 360 itself.”

From a hardware perspective, nearly 80% of respondents said Microsoft’s next console is the easiest to work with, and the overwhelming majority suspect it will be the sales leader over the next five years.

This presents an interesting opportunity for the next Xbox: It could come out of the gate with an established online framework in the form of Xbox Live, of course, with the potential to launch with a strong software lineup from eager and capable creators. After the self-destructive launch of the PlayStation Vita, Sony may not be able to convince developers that their games will sell. Having an impressive opening must be on Microsoft’s mind more than ever, and having a console that's easy to work with could help.

The ease of use compared to other consoles is assuredly attractive, too. By comparison, 63% of developers who spoke to IGN said the Wii U would be the most challenging platform to develop for. One creator went as far as saying, “we won’t be working on Wii U due to these complexities,” while another lamented the difficulty of moving innovative games unique to Wii U to other platforms. This poses the question: Will Nintendo once again need to rely primarily on first-party games to propel platform success? At any rate, the Wii U’s 2012 release window gives it a distinct advantage: time.

Link: http://www.ign.com/articles/2012/06/01/the-next-generation-according-to-game-developers
 
I downloaded 30GB of game data yesterday. What's the big deal? People in places with third-world connections will still buy physical, but nearing the end of the next console cycle, most people living in dense cities will have better bandwidth and less latency from the nearest CDN server than they can get from the optical drive.

There are countries with caps, though really, except for Australia, NZ or similar situations, there should be no need for them. Sane ISPs increase their capacity and peering agreements, and save on the commercial and usage-tracking costs.

Third-world connections? I expect them to boom in the current decade :).
Africa, India and others should be able to deploy LTE-Advanced networks, giving cheap and decent access with broad coverage and the lowest infrastructure costs.

Transmitting 30GB of game data will be feasible... unless you have caps, or it has to be avoided sometimes just because you're clogging the whole village's bandwidth (and maybe the next village's, whose 4G cell is connected to the first village's 4G cell). A console would be used by many people, though.
 
Unless they kicked out all the engineers with him, they should have the ability to determine cost savings at a given node and identify where there's a benefit. Accordingly, there wasn't this gen.

I'm sure you mean something else with this statement.

As is, it sounds as if you're claiming that node shrinks did/do not offer cost savings and were not utilized this gen, when that is factually incorrect, as you yourself stated.

90nm => 65nm => 45nm

The next logical step will be to 28nm which should be sometime late this year.

But aren't we seeing less than those sorts of savings?

Power-wise and density-wise, the savings are not perfectly 100% linear, and they hardly were before now. But recent node shrinks have stayed close enough to carry on the same trajectory as before.

The only difference of late is TSMC looking to increase the price per wafer.

As I said before, WHEN cost savings present themselves, those savings can be passed on to the consumer. Even if they ended up never shrinking the die and staying on 28nm, cost savings will occur over time even on the same node.

I'm not proposing a huge loss leader gameplan. Sell at cost and design for a ~$400 BOM.

GPUs are monsters, with as many transistors as can be crammed onto them.

Yes, and one in particular is 123mm2 on 28nm which retails for ~$100. Mind you, that's with a circuit board, 1GB GDDR5, AMD profits and retail markup...

All of that on the expensive 28nm process ...


A $400 console can't get its BOM below $300 through die shrinks within the normal lifespan of a console.

Given a $250 28nm chip budget (+ $150 for misc for $400 total):

Reductions with a 25% wafer price increase per node:
28nm: $250 (+ $150 misc = $400 BOM) - 2013
20nm: $156 (+ $100 misc = $256 BOM) - 2015
14nm: $97 (+ $50 misc = $147 BOM) - 2017

No wafer price increases (as before 28nm):
28nm: $250 (+ $150 misc = $400 BOM) - 2013
20nm: $125 (+ $100 misc = $225 BOM) - 2015
14nm: $63 (+ $50 misc = $113 BOM) - 2017

Compare those two and I'm not seeing an earth-shattering difference.
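For anyone who wants to poke at the numbers, here's a quick sketch of the same arithmetic; the halving per full node, the 25% wafer hike, and the $150/$100/$50 misc BOM are all my assumptions from above, not foundry pricing:

```python
# Sketch of the two scenarios above; all inputs are assumed figures.
def bom_schedule(chip_cost, misc_costs, wafer_hike=0.0):
    """Return (chip, misc, total) per node; chip cost halves each full shrink."""
    rows = []
    for misc in misc_costs:
        rows.append((round(chip_cost, 2), misc, round(chip_cost + misc, 2)))
        chip_cost *= 0.5 * (1 + wafer_hike)  # next node: half the cost, plus any wafer price hike
    return rows

misc = [150, 100, 50]  # assumed non-chip BOM at 28nm (2013), 20nm (2015), 14nm (2017)
print(bom_schedule(250, misc, wafer_hike=0.25))  # [(250, 150, 400), (156.25, 100, 256.25), (97.66, 50, 147.66)]
print(bom_schedule(250, misc))                   # [(250, 150, 400), (125.0, 100, 225.0), (62.5, 50, 112.5)]
```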

I agree that moving forward, node shrinks will likely be more expensive and will possibly experience some delays, but that still does not equate to "Change everything! The sky is falling!"

Proceed with caution, be sure to not sell for crazy losses as before, but don't abandon the entire approach and risk killing the golden goose in the process.

Presumably they are in a position to charge a price premium because people can't get their chips made elsewhere. The costs are rising not because some company is just pushing them up to make more money, but because the difficulties (and associated costs) of manufacturing are increasing, driving the price up - helped along by a lack of competition, since rivals can't manufacture at these harder nodes.

Charging customers more for higher costs is reasonable. In the example I provided above, even concurrent 25% wafer hikes are sustainable (if rather undesirable).

Exactly. If you run the risk of not being able to change the expensive chip to a cheap one through process shrinks, start with a cheaper chip in the first place.

The bit you were replying to was on diminishing returns for smaller chips.

On larger more expensive chips, the incentive to jump to the next node is greater than when dealing with smaller chips.

As I showed above, the formula still works and still provides a reasonable cost reduction overall.

25% wafer increases per node do not change the game so much as to turn MS and Sony into Wii followers. Despite what they would want the populace to believe...
 
GPUs are monsters, with as many transistors as can be crammed onto them. Larger fabrication nodes mean bigger, hotter chips - they just can't be made at larger nodes. So instead they are made at smaller nodes and at great cost, alleviated somewhat thanks to binning across performance levels.

Another advanced node chip example in addition to the HD7750 I presented above:

Llano, aka Fusion - 1st gen.

This chip was produced on GF's 32nm process as soon as the production was ready ... last year.

The chip was 228mm2 and retailed in a range from $135 at the top to ~$100 at the bottom, to take advantage of binned parts and offer product at different price points.

Note, this included margin for AMD, and for the retailer (oh and the heatsink).

I think this pretty well establishes a $125 ceiling for a ~225mm2 chip on a new(ish) node.
In other words, cry me a river that they can't afford to put a decent GPU in xb720/ps4.
 
Llano had a very tough time last year.
That's the sort of story that would scare people away from making the same choices, although many factors went into that.
At the same time, Llano's pricing during the good-die period of its wafer supply agreement is suspect. GF was eating a big chunk of the losses AMD should have taken if it were anyone but AMD.
 
•Epic talked with hardware manufacturers like Intel, AMD, Nvidia and Qualcomm to work out a tech road map.
•The demo GI saw was running on a single Nvidia GeForce GTX 680.
•New image-based lens flare tech, occurs dynamically.
•UE4 is capable of producing one million particles at once with no hit to game performance.
•Planning to release a UE4 dev kit soon.
•Epic wants "to empower game designers, to take charge of as much of the game production process as possible".
•"Next gen will be about a lot of things besides just next-gen graphics performance."
•UE4 is being designed for a decade that starts with the advent of next-gen consoles.

•Entire creation process streamlined; designers can edit and recompile source code without leaving the editor, and take control of the game any time they want.

•"New interface empowers designers to tweak basic programming without having to call over a programmer. Tech artists will be able to create complex assets, and programmers can expose certain values for designers as needed to give them access to simple tweaks."

http://i.minus.com/ibgcMAbm3dSITW.gif
 
Link please;)

•Entire creation process streamlined; designers can edit and recompile source code without leaving the editor, and take control of the game any time they want.

•"new interface empowers designers to tweak basic programming without having to call over a programmer,Tech artists will be able to create complex assets and programmers can expose certain values for designers as needed to give them access to simple tweaks"

Most important stuff IMO, the really important part.

But it would be useful no matter the target HW; no need for a next-gen console, we need this now (possibly powerful dev kits/PCs for the editor, though).


Nice gfx in that gif, still not very impressive IMO, but I like the particles and fire fx.

I will wait for work made by guys with better artists.
 
Awesome GIF, looks pretty next generation to me. Good to hear it's running on a single 680, but I'd expect nothing more if it's also going to run on next-gen consoles. It certainly doesn't surprise me that a 680 is capable of that. I expect it can do a lot more given time and optimisation.
 
Link please;)



Most important stuff IMO, the really important part.

But it would be useful no matter the target HW; no need for a next-gen console, we need this now (possibly powerful dev kits/PCs for the editor, though).


Nice gfx in that gif, still not very impressive IMO, but I like the particles and fire fx.

I will wait for work made by guys with better artists.

I don't have a link, it is from a Game Informer article ;)
 
Llano had a very tough time last year.
That's the sort of story that would scare people away from making the same choices, although many factors went into that.
At the same time, Llano's pricing during the good-die period of its wafer supply agreement is suspect. GF was eating a big chunk of the losses AMD should have taken if it were anyone but AMD.

...and this was last year as soon as GF was ready to produce 32nm.

We're talking about 28nm ... late NEXT YEAR. This is the same process that was producing retail product at the tail end of 2011. So two years on from that point, I'm pretty sure they'll be able to come to grips with a reasonable price, production volume, and yield.

...

And speaking of 28nm ... as I pointed out before the llano post, HD7750 retails for $110 ... with a heatsink, 1GB GDDR5, a board, retail packaging, shipping, retail margin, and margin for AMD.

So out of that budget, how much could the 123mm2 28nm chip possibly cost for TSMC to make?
$15 retail margin
$10 amd margin
$15 1GB GDDR5
$15 board
$2 retail packaging
$3 shipping
$5 assembly
$5 heatsink

$110 - $70 total estimate =
$40 chip cost

Sound about right?

Doubling this cost for a chip twice the size, we're looking at ~$80 for ~250mm2. Plugging in a bit of extra cost for lower yields on a bigger chip, bump that figure 50% ... ~$120 ... give or take.

This matches up rather well with the upper-limit figure from the Llano example.
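For completeness, the same estimate as a back-of-the-envelope script; every line item is a guess from the breakdown above, not a verified cost:

```python
# Back out the 123mm2 chip cost from the ~$110 HD7750 retail price,
# then scale it up to a ~250mm2 die with a rough yield penalty.
# All figures are guesses from the estimate above.
retail_price = 110
non_chip_costs = {
    "retail margin": 15, "AMD margin": 10, "1GB GDDR5": 15, "board": 15,
    "retail packaging": 2, "shipping": 3, "assembly": 5, "heatsink": 5,
}
chip_cost = retail_price - sum(non_chip_costs.values())  # ~$40 for the 123mm2 die
bigger_die = chip_cost * 2 * 1.5                         # double the area, +50% for yield -> ~$120
print(chip_cost, bigger_die)                             # 40 120.0
```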

And again, these are figures used for if this chip would be produced today ... not in late 2013.
 
Interesting, and for once this is original work (lulz), since I'm outing it all by myself: I randomly noticed in this E3 prediction vid that an apparent editor of Xbox World says Durango is a "monster" and "just incredible", according to what the devs have told them about the specs.

http://www.computerandvideogames.com/350515/video-cvg-predicts-microsofts-e3-durango-splinter-cell-6-ryse/

Around 3:20 of the vid on that page.

Throw it on the pile.

Cape Verde would be a monster versus Xenos, especially as you leave eDRAM out of the equation, implement a real tessellation scheme and make 1080p a real possibility too.
 
Cape Verde would be a monster versus Xenos, especially as you leave eDRAM out of the equation, implement a real tessellation scheme and make 1080p a real possibility too.

Agreed, magazines like Xbox World that are dedicated to a particular console and only that console tend to be very insular and only look at things in relation to that console. Thus BF3 on the Xbox will be reported as "best graphics ever", the 360 is still considered advanced and powerful and half the editors have no knowledge at all of PC GPU technology. Or if they do, they're under instructions to pretend like it doesn't exist. That's why I avoid those types of magazines like the plague.

Bottom line though is magazines like that would probably be amazed by 4x Xenos, a "monster" 1TFLOP GPU OMGZ!!
 