Can they really serve as the basis for your linear extrapolation on transistors? Keep in mind that yields for the 360 chips were abysmal at launch and for some time after. The cooling solution left much to be desired, and how many units have failed despite the additional heatpipe in Zephyr?
You're missing the point, or at least not addressing it. As has often been mentioned, the roadmap for transistor scaling becomes muddy beyond 22nm. Future cost reductions are at stake, and designers cannot bet on the kind of die-size reductions they have enjoyed in the past: power density, static/leakage power, pads/power supply, analog components... At sub-20nm, designers will also be contending with the rising importance of quantum effects. In other words, easier said than done.
That's why I looked at designs that work perfectly fine on 65nm. Intel is readying 32nm for 2010, TSMC apparently has 40nm ready, and rumor suggests the refreshes of the 4850/70 are on that process, along with ATI's DX11 chips. If a 1.4B-transistor chip is available on 65nm, then even though it uses a lot of energy, you still have 55nm, 45nm, and 40nm drops before the drop to 32nm. 1.4B transistors on 40nm shouldn't be a problem.
I'm also not suggesting that the chip be 1.4B transistors. I was thinking more of a 900M-transistor chip (in line with the 4870, but a DX11 variant based on the Xenos), with the other 500M used for eDRAM. If the 10MB of eDRAM in the Xenos is 100M transistors, then rough math says you can fit 50MB in that 500M-transistor budget. That should prevent tiling at 1080p with 4x FSAA, but I'm not good with that; someone better with the math can tell us. Regardless, it doesn't have to be 500M transistors; they could go 600M, or even less, or use one of the new forms of RAM.
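For what it's worth, here's a back-of-envelope sketch of that math in Python. The density figure (10MB ~ 100M transistors, i.e. ~10M per MB) comes straight from the Xenos numbers above; the 8 bytes per sample (32-bit colour + 32-bit Z/stencil) is my assumption, based on how the Xenos daughter die stores a sample:

```python
# Back-of-envelope eDRAM math. Density assumes Xenos-like eDRAM
# (10 MB ~= 100M transistors); 8 bytes/sample assumes 32-bit colour
# plus 32-bit Z/stencil per sample, as on the Xenos daughter die.

TRANSISTORS_PER_MB = 100e6 / 10  # ~10M transistors per MB of eDRAM

def edram_mb(transistor_budget):
    """MB of eDRAM a transistor budget buys at Xenos-like density."""
    return transistor_budget / TRANSISTORS_PER_MB

def framebuffer_mb(width, height, samples, bytes_per_sample=8):
    """Colour + Z footprint of a multisampled framebuffer, in MB."""
    return width * height * samples * bytes_per_sample / (1024 ** 2)

print(edram_mb(500e6))                # -> 50.0 MB, matching the rough math
print(framebuffer_mb(1920, 1080, 4))  # -> ~63.3 MB for 1080p with 4x AA
print(framebuffer_mb(1920, 1080, 2))  # -> ~31.6 MB for 1080p with 2x AA
```

So by this rough math, 50MB actually falls a little short of 1080p with 4x AA (~63MB needed), though 1080p with 2x AA (~32MB) would fit without tiling.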
They can make it a daughter die like they did with the Xenos.
I also envision that the CPU will be much smaller and less important than the GPU; I would think 500M transistors or so.
32nm and 22nm should provide enough of a drop in power, heat, and cost over its lifetime, especially if, as many of us expect, next gen lasts a lot longer than this one.
Well, you do have the foundry too (on MS's side; not sure about Sony), plus IBM/AMD, and even if they are not making a profit right now, soon you will have MS making a profit on each console sold too. Not so different in the end.
Perhaps. The foundry does make a profit, and I'm sure ATI and IBM get some cash, though I'm sure what IBM and ATI get is much less than what ATI charges the board makers. At some point the 360 will be sold for a profit, but we have no idea when that is; we do know it was sold at a loss for a long time and may still be sold at one. However, on just one process shrink the 360 has already dropped in cost by half.
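That halving is about what an ideal shrink predicts, for what it's worth. A quick sketch (ideal linear scaling only; real shrinks fall short because pads and analog blocks don't scale):

```python
# Ideal die-area scaling for the 360's 90nm -> 65nm shrink. Treat the
# result as an upper bound on the saving, not a measured figure.

def area_scale(old_nm, new_nm):
    """Fraction of the original die area after an ideal linear shrink."""
    return (new_nm / old_nm) ** 2

print(area_scale(90, 65))  # -> ~0.52: an ideal shrink roughly halves the die
```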
I am not sure I understand your post, but I do expect an XB3 in 2011/12 to cost more than a 360.
Thing is, if you release a console on 32nm, you will want it to cost as little as possible at launch, because it will be hard (though not impossible) to reduce cost later.
Anyway, as I showed before, inexpensive hardware from today is already quite competitive with a 360.
I'm saying that the GeForce GTX 280 is from today; actually, it's almost a year old, I believe, and it's 1.4B transistors on 65nm. I don't see why MS couldn't go with a similar transistor budget on 40nm or even 32nm.
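To put rough numbers on that: the GT200 die is about 576mm^2 at 65nm (the commonly cited figure), so under ideal scaling, which real shrinks never quite reach, a sketch looks like this:

```python
# Ideal-shrink estimates for GT200-class silicon. The 576 mm^2 starting
# point is the commonly cited 65nm GT200 die size; the scaled figures
# are best-case estimates, not real die sizes.

GT200_AREA_MM2 = 576

def area_scale(old_nm, new_nm):
    return (new_nm / old_nm) ** 2

for node_nm in (40, 32):
    print(node_nm, round(GT200_AREA_MM2 * area_scale(65, node_nm)))
# -> 40nm: ~218 mm^2, 32nm: ~140 mm^2 -- console-friendly sizes, ideally
```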
Why would a new-generation console limit itself to KZ2-level visuals? While it would probably be easier to reach those with a more powerful machine, there would be pressure to get more out of that machine, and you'd be back in a situation similar to what we have currently.
That's my point. A 2011 system would easily outdo Killzone, and the effort required would be almost nonexistent. The time and effort they put in on the PS3, hiding the limits of its texturing capabilities and other constraints to get Killzone looking as good as it does, would be removed from the equation.
1) If you desire to get "easier results" for a game that looks "as good" as KZ2, why even increase the hardware if that's your benchmark? Furthermore, no engine is infinitely scalable, so if you increase the hardware to such a degree, you *have* to redo your framework and create the engine to utilize the hardware, unless you are using some sort of middleware, in which case there is still a lot of work to be done.
Sure, WoW runs on a multitude of cards, but how many of them actually "significantly" improve the visuals of WoW? What do you gain playing a game like WoW on a high end system, aside from some AA and high res/frame rate?
Have you seen what Crytek's CryEngine 2 is capable of on today's hardware?
These are all user mods, btw:
http://teamyo.org/xzero/Images/Crysis/Levels/New Realistic Forest/NewForest5.jpg
http://i35.tinypic.com/qodb39.jpg
http://www.youtube.com/watch?v=6djX...d.com/showthread.php?t=48584&highlight=crysis
And let's not forget the Toy Shop demo from ATI, based on X1800 tech:
http://developer.amd.com/media/gpu_videos/toyshop.html
You don't need an infinitely scalable engine; you just need a good engine that can be modded to take advantage of new hardware features.