...the profit potential from downloadable goods justifies their inclusion.
Profit potential from including some type of local storage is indeed there, but not necessarily on an HDD scale.
It would be wise to limit game sizes to under 25 GB.
I don't know how Sony would offer a 40+ GB game (Uncharted 3) for download....
Half-node labels are mostly marketing these days, with less meaning than they once had. A half node at or below 45nm isn't an optical shrink of the full node numerically above it: either there is no full node above it at a given manufacturer, or the two are more distantly related than they once were. That isn't what happened at full node shrinks ...
That eventuality has already been raised, by pretty much everyone including the foundries. 28nm's lifespan is expected to be ahistorically long because of it. At some point, the cost benefit goes away and even cell phone makers will stop filling their latest node production lines with product.
450mm may be or will be, but currently is not. Given the rate at which other process costs and expenditures are rising with each node--excluding things that get more expensive with 450mm--it's more about keeping things closer to the current normal that many are unhappy about. TSMC knows this and will adjust accordingly; 450mm wafers are part of that solution.
This is comparing a fully functional *projected* single-bin console chip against an existing standalone component that is part of a lineup spanning multiple price points. As to the point about higher-MSRP items being more able to absorb the higher costs of new nodes... AMD is currently selling a 28nm 123mm2 chip-based card for $100.
This chip size is roughly the same as a projected xb360/ps3 SoC @ 28nm.
Both consoles retail for more than double this cost.
Unless they kicked out all the engineers with him, they should have the ability to determine cost savings at a given node and identify where there's a benefit. Evidently, there wasn't one this gen. Maybe before, with Kutaragi at the helm, more importance was placed on shrinking chips than under Stringer ...
But aren't we seeing less than those sorts of savings? Full nodes cut the cost in half (in theory).
GPUs are monsters, with as many transistors as can be crammed onto them. Larger fabrication nodes mean bigger, hotter chips - they just can't be made at larger nodes. So instead they are made at smaller nodes and at great cost, alleviated somewhat thanks to binning across performance levels. If there were no gains to be had, I'm sure AMD/ATI/Nvidia would still be on 45nm ... (or they'd mirror Sony/MS and only release on full node drops)
But that isn't the case.
Yes, but not by as large a margin as they used to, as I understand it (I can't find a straightforward article or report on diminishing returns, but that's the only description I hear applied to chip manufacture). So where a node shrink once meant your processors cost half as much, and you could design $200 chips at launch that'd be $100 in two years and $50 in four, next gen is looking more like costs only dropping to about 75% per node: from $200 down to $150 down to $112.50. And that's with TSMC saying 14nm might not even happen. A $400 console can't get its BOM below $300 through die shrinks within the normal lifespan of a console. This gen it'll happen only because the consoles have gotten so long in the tooth. The savings of a full node have to offset the engineering/redesign costs, which I'm sure were/are more expensive than in the PS2 days due to the complexity of the chipset.
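The arithmetic above can be sketched in a few lines. This is purely illustrative, using the figures from the post (the 0.5 and 0.75 per-node factors are the assumptions under discussion, not measured data):

```python
def shrink_costs(start_cost, factor, nodes):
    """Return chip cost at launch and after each of `nodes` full-node
    shrinks, where each shrink leaves `factor` of the previous cost."""
    costs = [float(start_cost)]
    for _ in range(nodes):
        costs.append(round(costs[-1] * factor, 2))
    return costs

# Classic scaling: each full node halves the cost.
print(shrink_costs(200, 0.5, 2))   # [200.0, 100.0, 50.0]

# Diminishing returns: each node only drops cost to ~75%.
print(shrink_costs(200, 0.75, 2))  # [200.0, 150.0, 112.5]
```

Under the 75% assumption, two full shrinks only take a $200 chip to $112.50 instead of $50, which is why the BOM can't fall nearly as fast as it did in previous generations.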
Presumably they are in a position to charge a price premium because people can't get their chips made elsewhere. The costs are rising not because some company is just pushing them up to make more money, but because the difficulties (and associated costs) of manufacturing are increasing, driving the price up (thanks to a lack of competition, because rivals can't manufacture at these nodes since it's harder). Indeed. I'd be surprised if they didn't balk at a 25% increase in wafer costs for 28nm. The real kicker is what TSMC were looking to charge for 20nm and below.
However what TSMC would like to charge and what they will settle for are two very different things.
Exactly. If you run the risk of not being able to change the expensive chip to a cheap one through process shrinks, start with a cheaper chip in the first place. If the chip is cheap enough already, it may not be beneficial to invest in a redesign whose savings don't offset the cost.
To further signal the winding down of the current console generation, approximately 60% of respondents have no plans to release games for the Xbox 360, PlayStation 3, or Nintendo Wii after 2013. Of course, this means some 40% intend to keep at current-gen releases after next year. To that point, an anonymous developer told IGN, “I would not be surprised if something atypical cannibalizes the market, maybe even the Xbox 360 itself.”
From a hardware perspective, nearly 80% of respondents said Microsoft’s next console is the easiest to work with, and the overwhelming majority suspect it will be the sales leader over the next five years.
This presents an interesting opportunity for the next Xbox: It could come out of the gate with an established online framework in the form of Xbox Live, of course, with the potential to launch with a strong software lineup from eager and capable creators. After the self-destructive launch of the PlayStation Vita, Sony may not be able to convince developers that their games will sell. Having an impressive opening must be on Microsoft’s mind more than ever, and having a console that's easy to work with could help.
The ease of use compared to other consoles is assuredly attractive, too. By comparison, 63% of developers who spoke to IGN said the Wii U would be the most challenging platform to develop for. One creator went so far as to say, “we won’t be working on Wii U due to these complexities,” while another lamented the difficulty of moving innovative games unique to Wii U to other platforms. This raises the question: Will Nintendo once again need to rely primarily on first-party games to propel platform success? At any rate, the Wii U’s 2012 release window gives it a distinct advantage: time.
I downloaded 30GB of game data yesterday. What's the big deal? People in places with third-world connections will still buy physical, but nearing the end of the next console cycle, most people living in dense cities will have better bandwidth and less latency from the nearest CDN server than they can get from the optical drive.
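The claim about downloads overtaking the optical drive can be roughed out with some back-of-the-envelope numbers. These speeds are hypothetical assumptions for illustration (100 Mbps broadband, and roughly 72 Mbps for a 2x Blu-ray read), not figures from the post:

```python
def hours_to_transfer(gigabytes, megabits_per_second):
    """Time in hours to move `gigabytes` (decimal GB) at a given rate."""
    megabits = gigabytes * 8 * 1000  # GB -> megabits
    return megabits / megabits_per_second / 3600

download = hours_to_transfer(30, 100)  # assumed 100 Mbps connection
disc     = hours_to_transfer(30, 72)   # assumed 2x BD read, ~72 Mbps

print(f"30 GB download: {download:.2f} h, disc read: {disc:.2f} h")
```

With those assumptions, a fast connection already moves 30 GB faster than a slow Blu-ray read, which is the crux of the argument; a game installed to an HDD is another matter entirely.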
• Entire creation process streamlined: designers can edit and recompile source code without leaving the editor, and can take control of the game at any time.
• "New interface empowers designers to tweak basic programming without having to call over a programmer. Tech artists will be able to create complex assets, and programmers can expose certain values for designers as needed to give them access to simple tweaks."
Link please
That's the most important stuff IMO, the really key part.
But it would be useful no matter the target HW; no need for a next-gen console, we need this now (though it may need powerful dev kits/PCs for the editor).
Nice gfx in that gif, still not very impressive IMO, but I like the particles and fire fx.
I will wait for work made by guys with better artists.
Llano had a very tough time last year.
That's the sort of story that would scare people away from making the same choices, although many factors went into that.
At the same time, Llano's pricing during the good-die period of its wafer supply agreement is suspect. GF was eating a big chunk of the losses AMD should have taken if it were anyone but AMD.
Interesting, and for once this is original work (lulz) since I'm outing it all by myself: I randomly noticed in this E3 prediction vid that an apparent editor of Xbox World says Durango is a "monster" and "just incredible", according to what the devs have told them of the specs.
http://www.computerandvideogames.com/350515/video-cvg-predicts-microsofts-e3-durango-splinter-cell-6-ryse/
Around 3:20 of the vid on that page.
Throw it on the pile.
Cape Verde would be a monster versus Xenos, especially as you leave eDRAM out of the equation, implement a real tessellation scheme and make 1080p a real possibility too.
Bottom line though is that magazines like that would probably be amazed by 4x Xenos, a "monster" 1TFLOP GPU OMGZ!!