Because first-gen ports from an old console to a new console tend to suck no matter what.
> Would you give up, say, a bank of 200 shaders (AMD style) in order to have, say, another 20 MB of eDRAM on top of everything else? Sure it's nice to have, but how much of a tradeoff would you be willing to make to get it, assuming you're designing a system for yourself?

I am not even sure I would personally want any MSAA hardware in the next-generation console. It of course depends on how much die space all that extra hardware takes (we have to assume MSAA would be at least 8x). I would take double ROPs (double fillrate) any day over 8xMSAA hardware; double fillrate also requires less bandwidth than 8xMSAA (uncompressed, that's 8x the samples per pixel versus 2x the pixels). Without MSAA the eDRAM would offer less benefit. Hierarchical Z-buffering and Z/color compression techniques have evolved since the last generation of consoles. With all the new bandwidth-saving technology in place, it would likely be more cost-effective to simply have separate graphics memory instead of eDRAM. However, having UMA (shared GPU/CPU memory and memory bus) without eDRAM sounds like suicide. And without UMA you will likely have more GPU->CPU latency, since you have to transfer data from graphics memory to main memory (mixed CPU/GPGPU calculation would suffer). As long as the extra latency is under one frame, it doesn't matter for most algorithms (virtual texturing, for example).
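To make the "under one frame" point concrete, here's a minimal sketch (my illustration, not anything from the post) of hiding a GPU->CPU transfer behind one frame of latency with double-buffered OpenGL pixel-buffer-object readback, the usual PC pattern for things like virtual texturing feedback buffers. The buffer size/format and the GL context setup are placeholder assumptions:

```cpp
// Double-buffered asynchronous readback with pixel buffer objects.
// The CPU maps the pixels the GPU wrote one frame ago, so the bus transfer
// overlaps with rendering and the extra latency stays under one frame.
// Assumes a working GL context and GLEW initialization (omitted here).
#include <GL/glew.h>
#include <cstring>

const int WIDTH = 256, HEIGHT = 256;          // e.g. a tile-request feedback buffer
const GLsizeiptr BYTES = WIDTH * HEIGHT * 4;  // RGBA8

GLuint pbo[2];
int frame = 0;

void initReadback() {
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, BYTES, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void readbackFeedback(unsigned char* cpuCopy) {
    int writeIdx = frame & 1;        // PBO the GPU fills this frame
    int readIdx  = (frame + 1) & 1;  // PBO filled last frame, now safe to map

    // Kick off an asynchronous GPU->PBO copy of this frame's pixels.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIdx]);
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // Map last frame's PBO; the transfer has had a whole frame to finish,
    // so this map should not stall the CPU.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
    if (void* p = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        std::memcpy(cpuCopy, p, BYTES);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++frame;
}
```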
> And without UMA you will likely have more GPU->CPU latency, since you have to transfer data from graphics memory to main memory.

No you don't. From what I know, in the PS3 both the GPU and CPU can directly access each other's memory banks. It's not as low-latency as their local memories, but it's still fast enough.
Part 3: it seems that Larrabee's failure pretty much convinced everybody that the idea is insane. Still, I'm not convinced.
As far as I know, the SGX does not support SM5, so no tessellation or displacement maps.
Indeed, I could see flash being a viable medium for physical storage in the generation after next.
> No you don't. From what I know, in the PS3 both the GPU and CPU can directly access each other's memory banks. It's not as low-latency as their local memories, but it's still fast enough.

Yes, you will get more latency. The amount of extra latency of course depends on the memory architecture. I doubt we will get latency as high as PC hardware does (where data needs to be copied over the PCI Express bus), but if the new console is using more mainstream hardware (with completely separate graphics memory), we have to expect higher CPU->GPU->CPU latency as well.
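For what it's worth, both access patterns exist on PC OpenGL too; this is only a rough analogy to the PS3 situation, not its actual mechanism (assuming a desktop GL 3.x context, with `buf` as a placeholder buffer name):

```cpp
// Explicit copy out of GPU memory vs. mapping the buffer so the CPU reads
// it directly. The map avoids the bulk copy, but each CPU access pays the
// latency of wherever the data actually lives.
#include <GL/glew.h>

void readViaCopy(GLuint buf, void* dst, GLsizeiptr bytes) {
    // Explicit transfer: data is copied from video memory into 'dst'.
    glBindBuffer(GL_COPY_READ_BUFFER, buf);
    glGetBufferSubData(GL_COPY_READ_BUFFER, 0, bytes, dst);
}

const void* readViaMapping(GLuint buf, GLsizeiptr bytes) {
    // Direct access: the buffer is mapped into the CPU's address space.
    // Caller must glUnmapBuffer(GL_COPY_READ_BUFFER) when done.
    glBindBuffer(GL_COPY_READ_BUFFER, buf);
    return glMapBufferRange(GL_COPY_READ_BUFFER, 0, bytes, GL_MAP_READ_BIT);
}
```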
> What about Series 6? Will it?

That's a good question, and we have only a little information... For example, a single-core PowerVR Series 6 reportedly has almost the same performance as a dual-core PowerVR Series 5 SGX543 (MP2) at the same clock (generally 200 MHz for smartphones), plus a new shader architecture called USSEx... And they have the Caustic ray-tracing software and hardware engine (http://www.caustic.com/).
Cell went with the cheapest solution possible: it puts the burden of moving data entirely on the programmers. Nor was it a successful bet, as the Cell roadmap has been terminated. OK, it actually shipped and sold a bit, but not in the PC market, which is the market it supposedly aimed at.
> Cell doesn't need a roadmap; all it needs is a mass-market device to be put in.

Blah blah; the point is IBM cut it, Toshiba cut it, and for Sony it's still unclear. No roadmap, no further development, that's it; Linux development stalled really early, etc.
Once Kutaragi left, it no longer had to be aimed at the PC market. The PS3 could possibly sell another 50 million consoles in the next 5 to 10 years and reach an install base of 100 million, which would make it very successful. As for the "not worth the investment" case: if it were in the PS4, they could recoup the costs over two generations and an install base of possibly 200 million.
As with the PS3, Sony could throw whatever they want into the PS4 and know that devs are going to have to learn it, just like the PS3. They might as well throw in something devs already know; porting between PS3 and PS4 would also be made easier with Cell (along with backwards compatibility).
Also, your "cheapest solution possible" is not the case at all. Silicon used for managing data is silicon that could have been used for execution; Cell is about processing power. At the time, Intel was still releasing Core Duos.
Definitely. I've never understood why people suggest POWER7 when it's a server processor. Well, I guess it's a higher number than other Power cores, so it must be better.
I also wonder if ARM couldn't be wielded. As long as they have a scalable system where a console could use loads of cores, it could do the job: effectively being what Cell set out to be, a scalable architecture with portable code between devices, but starting from the power-efficient end and scaling up, instead of Cell, which started from the high-performance end and never managed to scale down competitively. ARM cores are suitably titchy; they'd just need the right topology.

This would be the best solution IMO. The same cores could be used in a mobile, tablet, netbook, laptop and console, running exactly the same code, just scaled accordingly, without middleware getting in the way of code efficiency; see the toy sketch below. One architecture to rule them all! Combine it with SGX's scalability, and the final console may not be the most powerful, but if it were bytecode-compatible with the handheld and tablet versions, it'd offer superb value: buy one game, play it on all the devices you own; stream content from one device to another. The requirement for a single set of service and application code would make maintaining complex device interactions nice and easy, unlike maintaining a middleware like PSS across very varied hardware.
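As a toy illustration of the "same code, just scaled accordingly" idea (plain C++11, no actual console or ARM API assumed), the worker count can be discovered at runtime, so one binary spans a 2-core phone and a many-core console:

```cpp
// One binary, scaled to whatever core count the hardware reports.
#include <thread>
#include <vector>
#include <numeric>
#include <cstdio>

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;  // the query may return 0 if the count is unknown

    std::vector<long long> partial(n, 0);
    std::vector<std::thread> pool;
    const long long N = 100000000;

    for (unsigned t = 0; t < n; ++t) {
        pool.emplace_back([&, t] {
            // Each core sums an interleaved slice; more cores, smaller slices.
            for (long long i = t; i < N; i += n) partial[t] += i;
        });
    }
    for (auto& th : pool) th.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::printf("%u workers, sum = %lld\n", n, total);
    return 0;
}
```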
OOOE (out-of-order execution) will fit in better with big developers with a mix of abilities, allowing junior programmers to churn out functional, if not pretty, code that the processor does a reasonable job of running, with critical functions given over to the senior programmers to write. Given the complexity of modern development, a console that can offer cheaper, easier development is going to be attractive.
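A contrived sketch of the kind of "functional, if not pretty" code in question (all names are mine): each iteration does a data-dependent load with no manual unrolling or prefetching. An out-of-order core overlaps the independent loads across iterations on its own; an in-order core in the Xenon/PPU mould stalls on each cache miss in turn.

```cpp
// Naive gather through an index table: a common pattern that OOO hardware
// runs tolerably as written, but that needs hand scheduling on in-order cores.
#include <vector>
#include <cstddef>

long long naiveGather(const std::vector<int>& indices,
                      const std::vector<int>& values) {
    long long sum = 0;
    for (std::size_t i = 0; i < indices.size(); ++i) {
        // Load, dependent load, add. The OOO window keeps several
        // iterations' loads in flight; an in-order core waits on each one.
        sum += values[indices[i]];
    }
    return sum;
}
```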
I've been wondering how high one can actually clock those newer ARMs. I know the A8 in my N900 has been OC'd from 600 to 1700 MHz. With decent cooling, I don't think it's out of the question to see ARMs at well over 3 GHz.
IBM sources claim that the new multi-core PowerPC processor, which Big Blue has spent several years developing, is now part of a joint development project with the Japanese company.
Sony recently moved to buy back the Toshiba Cell factory in Nagasaki for $600 million, with some tipping that it will be used to manufacture the processor for Sony, Toshiba and IBM devices.
The new 32nm Cell processor is tipped to have up to 16 SPEs, which would make it twice as fast as the current Cell processor, according to IBM leaks.
Japanese sources claim that Sony is gearing up to manufacture the Cell processor in bulk, with some analysts tipping that the new processor will also appear in Sony notebooks and be built into new Sony Bravia TVs.
> The stuff I was reading yesterday was talking about A15s going up to 2.5 GHz.

Was it inside a tablet or netbook that has nearly non-existent cooling and most likely a TDP well under 20 W? Allow it to eat some 100 W on initial release, and then it gets interesting.
Sony Banking On New Super Fast Cell Processor For PS4 & Bravia TV
http://www.smarthouse.com.au/Gaming/Industry/F5C6F8A6?page=1