Yoshida confirms SCE working on new hardware

Cell is basically dead if the rumors that all 3 partners have stopped active R&D on it are true. Without continued active funding of R&D it's basically a dead end.
PowerPC is still actively developed, and the SPUs won't be able to change much at the architectural level as they are very strictly specified. But if Sony isn't spending money on a Cell2 already, then it surely is a dead end. Nothing is known about how Sony is spending its next-gen research budget currently, except that Cell as its own computing platform is dead - but that could also mean a "Cell2" forged more closely to Sony's desires (if you are optimistic).
And as to MLAA, any CPU can do it; there isn't anything particularly special about Cell that makes it impossible on other architectures. You may have a valid point that, at its clock speed, Cell may currently be the fastest at MLAA, but considering other architectures scale much higher in clock speed and up to 6 (soon to be 8) "real" cores, I'm not sure that's going to be a very good argument for the next generation of consoles.
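For reference, MLAA really is ordinary per-pixel work: find the colour discontinuities, classify the edge shapes, then blend along them. A rough sketch of just the first (edge-detection) pass in plain C - the buffer layout, the luma() helper and the threshold are made up for illustration, and real implementations do the shape classification and blending afterwards:

[code]
/* Rough sketch of MLAA's first pass on a plain CPU: flag luminance
 * discontinuities between neighbouring pixels.  Buffer layout, the
 * threshold value and the luma() helper are made up for illustration. */
#include <stdint.h>
#include <math.h>

static float luma(uint32_t rgba)
{
    float r = (float)((rgba >> 16) & 0xFF);
    float g = (float)((rgba >>  8) & 0xFF);
    float b = (float)( rgba        & 0xFF);
    return (0.299f * r + 0.587f * g + 0.114f * b) / 255.0f;
}

/* edge_mask[x + y*w] gets bit 0 set for a left edge, bit 1 for a top edge. */
void mlaa_find_edges(const uint32_t *color, uint8_t *edge_mask,
                     int w, int h, float threshold)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float c = luma(color[x + y * w]);
            uint8_t m = 0;
            if (x > 0 && fabsf(c - luma(color[(x - 1) + y * w])) > threshold)
                m |= 1;   /* discontinuity with left neighbour */
            if (y > 0 && fabsf(c - luma(color[x + (y - 1) * w])) > threshold)
                m |= 2;   /* discontinuity with top neighbour  */
            edge_mask[x + y * w] = m;
        }
    }
}
[/code]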
Replace "Cell" with "GPU" and you can do everything an R700 does on a much faster clocked CPU as well, albeit with laughable speed and efficiency.
A Cell respin (I think at 65nm) was shown to run at >5GHz, and if you take a 6-8 core x86 CPU, how many SPUs would fit in that transistor/die budget? 20? Over 32? I don't think any of your arguments hold up, at least not on tasks like MLAA that fit Cell.
You are comparing current x86 CPUs with a design based on a >5-year-old process.
If you can fit tasks within the constraints of the SPU/Cell model, they run way faster than on traditional CPUs with a similar die/transistor/power budget.
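To make "fit tasks to the SPU/Cell model" concrete: it usually means streaming data through the small local store with double-buffered DMA, so the transfer of the next chunk overlaps processing of the current one. A minimal sketch of that pattern - dma_get(), dma_wait() and process_chunk() are hypothetical stand-ins for the real MFC calls and whatever kernel you run, and the chunk size is just picked to fit comfortably in a 256 KB local store:

[code]
/* Double-buffered streaming sketch: prefetch chunk N+1 while chunk N
 * is being processed.  Assumes total_bytes is a multiple of CHUNK.
 * dma_get/dma_wait/process_chunk are hypothetical, for illustration only. */
#include <stdint.h>
#include <stddef.h>

#define CHUNK 16384  /* bytes per transfer */

extern void dma_get(void *local, uint64_t remote, size_t bytes, int tag);
extern void dma_wait(int tag);
extern void process_chunk(float *data, size_t n);

void stream_process(uint64_t src, size_t total_bytes)
{
    static float buf[2][CHUNK / sizeof(float)];
    size_t done = 0;
    int cur = 0;

    dma_get(buf[cur], src, CHUNK, cur);        /* prefetch first chunk */
    while (done < total_bytes) {
        size_t next_off = done + CHUNK;
        if (next_off < total_bytes)            /* kick off the next transfer early */
            dma_get(buf[cur ^ 1], src + next_off, CHUNK, cur ^ 1);

        dma_wait(cur);                         /* wait only for the chunk we need now */
        process_chunk(buf[cur], CHUNK / sizeof(float));

        done += CHUNK;
        cur ^= 1;
    }
}
[/code]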
However, the security system of Cell is probably the best I've seen. Hopefully it makes it into other CPU designs as I'd hate to see it die with Cell.
I'm pretty sure Sony will use something similar on their future consoles and handhelds (not saying SPUs... but isolated programmable cores).
 
Well, the JTAG exploit was pretty much a sign that they goofed pretty hard. CPUs have had proprietary lockouts on any advanced JTAG/debug functionality for quite some time.

The root of trust model is effectively the only model that works.
No, effectively their model is the only one which has worked ... the rest is theory.

As far as theory goes, the exploits for TXT, which is a lot newer than the PS3, are a lot more practical than the glitch attacks on the PS3 (AFAIK they haven't been able to find the bugs and/or drive encryption details necessary to develop a mod chip).
 
No, effectively their model is the only one which has worked ... the rest is theory.

As far as theory goes, the exploits for TXT, which is a lot newer than the PS3, are a lot more practical than the glitch attacks on the PS3 (AFAIK they haven't been able to find the bugs and/or drive encryption details necessary to develop a mod chip).

And none of the TXT exploits (2, and actually variants of each other) work on a defined-configuration system. They all rely on effectively falsifying the configuration information, which isn't possible when you are in a known configuration. In fact both of them rely on a software flaw that uses a non-signed config even though a signed config is available.
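To spell out the flaw pattern being described: the exploits depend on software accepting an unsigned configuration even though a signed one is available. In sketch form (load_trusted_config, find_config and verify_signature are hypothetical names, purely for illustration), the correct behaviour is to never fall back:

[code]
/* Illustration only: a measured-launch environment should never silently
 * fall back to unverified configuration data.  All names are hypothetical. */
#include <stdbool.h>
#include <stddef.h>

struct config { const unsigned char *data; size_t len; };

extern struct config *find_config(bool want_signed);     /* hypothetical */
extern bool verify_signature(const struct config *cfg);  /* hypothetical */

const struct config *load_trusted_config(void)
{
    struct config *signed_cfg = find_config(true);
    if (signed_cfg) {
        /* A signed config exists: use it only if the signature checks out. */
        return verify_signature(signed_cfg) ? signed_cfg : NULL;
    }
    /* No signed config at all: in a defined/known configuration this is a
     * policy failure in itself, so refuse rather than falling back to an
     * unsigned blob an attacker can substitute. */
    return NULL;
}
[/code]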
 
PowerPC is still actively developed

PA-RISC and Alpha are still actively developed too ;)

A Cell respin (I think at 65nm) was shown to run at >5GHz

And no, people need to learn what a schmoo plot really means and what it's really for.

If you can fit tasks within the constraints of the SPU/Cell model, they run way faster than on traditional CPUs with a similar die/transistor/power budget.
I'm pretty sure Sony will use something similar on their future consoles and handhelds (not saying SPUs... but isolated programmable cores).

If your program fits within the constraints of my infinite FP processor, I can deliver infinite flops.
 
PA-RISC and Alpha are still actively developed too ;)
POWER7 was released this year. My bad for calling it PowerPC I guess.
And no, people need to learn what a schmoo plot really means and what it's really for.
Can't find any better reference right now:
http://domino.research.ibm.com/tchj...fe63d7465faf3f9e8525733f005f667f!OpenDocument
Correct operation (SPUs only, lab conditions, yadda yadda) up to 7.3 GHz at 65nm SOI. Not far-fetched to imagine Cell could reach operating frequencies meeting and exceeding current Intel CPUs.
If your program fits within the constraints of my infinite FP processor, I can deliver infinite flops.
And what's that? Arbitrarily complex operations with a * 0.0 at the end?
Cell's real, MLAA is real, your made-up stuff isn't even funny or worth replying to. [MOD] NO INSULTS [/MOD]
 
POWER7 was released this year. My bad for calling it PowerPC I guess.
Can't find any better reference right now:
http://domino.research.ibm.com/tchj...fe63d7465faf3f9e8525733f005f667f!OpenDocument
Correct operation (SPUs only, lab conditions, yadda yadda) up to 7.3 GHz at 65nm SOI. Not far-fetched to imagine Cell could reach operating frequencies meeting and exceeding current Intel CPUs.

Intel CPUs can schmoo rather high if they want them to. There are other, more important things than schmooage.
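For anyone following along: a schmoo plot is just a pass/fail grid swept over operating points, typically supply voltage versus clock frequency for one part on a tester, so a single passing corner at 7.3 GHz says nothing about shippable clocks. A toy illustration (the passes() model is completely made up):

[code]
/* Prints a made-up shmoo grid: '+' where the part passed the test pattern
 * at that (voltage, frequency) point, '.' where it failed.  Toy model only. */
#include <stdio.h>
#include <stdbool.h>

static bool passes(double volts, double ghz)
{
    /* toy model: higher voltage buys more frequency headroom */
    return ghz <= 3.0 + 9.0 * (volts - 0.9);
}

int main(void)
{
    printf("        ");
    for (double f = 3.0; f <= 7.5; f += 0.5)
        printf("%5.1f", f);
    printf("  GHz\n");

    for (double v = 0.9; v <= 1.4001; v += 0.1) {
        printf("%4.2f V  ", v);
        for (double f = 3.0; f <= 7.5; f += 0.5)
            printf("   %c ", passes(v, f) ? '+' : '.');
        printf("\n");
    }
    return 0;
}
[/code]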


And what's that? Arbitrarily complex operations with a * 0.0 at the end?
Cell's real, MLAA is real, your made-up stuff isn't even funny or worth replying to. Which I probably won't do again, given that you either just troll around or lack basic reading comprehension skills.

I'm kinda partial to x^inf.

Cell is largely NOT real: it handles a small subset of specialized graphics operations not nearly as well as a GPU, and for the other stuff it's worse than a CPU. It really doesn't have a place.
 
Cell is largely NOT real: it handles a small subset of specialized graphics operations not nearly as well as a GPU, and for the other stuff it's worse than a CPU. It really doesn't have a place.

As others have pointed out, not nearly as well as what GPU? Cell didn't just come out ... SPUs seem to be able to do a lot of stuff that is relevant to gaming very well. Even if a CPU would be better at one end of the spectrum and a GPU at the other, if Cell or a follow-up could do either well enough, then it has the advantage of being able to shift all its capacity in one direction or the other depending on what is needed. There was a time, not so long ago, maybe even today, when that kind of quality was highly praised in a certain famous little ATI graphics card.
 
CPUs are not very relevant in consoles, unless they're in a console with a crappy GPU and asked to help out. Cell's unique situation helped the PS3 at the time, but next gen things will be different. The right way to design a console is to start with the GPU and design it properly so that you don't have to rely on a fast but hard-to-program CPU to get good results. The CPU is a secondary concern.

The next PS3 could possibly have 4-6 superscalar PPE cores with some branch prediction, and it'd be enough. They could also stick 6 SPUs in there for BC and for the later years of the console's life, when the first parties would spend the effort to improve graphics using them, while initially the PPE cores would be enough for just porting. The 7th, OS-running SPU would not be needed since one of the PPEs could do that task much more effectively, and we'd be done with the CPU part.

6 SPUs are only ~120M transistors of overhead, which is much more worth it than x86 decoding overhead, which is useless in a console. x86 wouldn't happen anyway due to licensing and Sony not being willing to be held hostage to Intel.

The GPU and the memory subsystem are the key and by far the most interesting point of discussion. What comes after GDDR5? Does XDR2 have a place in the future, since it seems to be faster and more efficient for a given frequency and number of pins, etc.?
 
As others have pointed out, not nearly as well as what GPU? Cell didn't just come out ... SPUs seem to be able to do a lot of stuff that is relevant to gaming very well. Even if a CPU would be better at one end of the spectrum and a GPU at the other, if Cell or a follow-up could do either well enough, then it has the advantage of being able to shift all its capacity in one direction or the other depending on what is needed. There was a time, not so long ago, maybe even today, when that kind of quality was highly praised in a certain famous little ATI graphics card.

It has a more flexible core and programming model than GPU ones. It will need more software advances to take full advantage of the combo. The great thing is you still have traditional GPU cores to work with in parallel on the same frame in such an "equal/swappable" CPU + GPU model.

IMHO, this is the most interesting aspect of the PS3.

The problem is developers are forced to use Cell even in situations where simpler approaches would suffice.
 
Intel CPUs can schmoo rather high if they want them to. There are other, more important things than schmooage.




I'm kinda partial to x^inf.

Cell is largely NOT real: it handles a small subset of specialized graphics operations not nearly as well as a GPU, and for the other stuff it's worse than a CPU. It really doesn't have a place.

You sound like someone who would wholeheartedly believe the evangelical Microsoft message that the GPU should do all the work while the CPU is free to do other tasks - a line they have been pushing since they convinced Sega to drop the second SH4 and went public with the Xbox in 2000.

It's really just Microsoft DirectX PR; next thing you know they will want gamers to run Futuremark benchmarks on a console (notice that there have been no competing OpenGL benchmarks for over 8 years).

Traditional game consoles that are not part of the Microsoft DirectX software platform do not work the way Microsoft wants them to, or Microsoft would have gained more monopoly money.

PS1, PS2, PS3 - hell, Genesis, Saturn, and N64 - relied heavily on a CPU+GPU team solution based on custom software.

The Emotion Engine was built around accelerating 3D tasks, and we never really got to see its full potential (we saw a good, growing fraction of it) because of the early next-gen shift (it takes years to write dev tools and years to make AAA games).

Cell BE is undeniable; just because you chose NOT to read up on well-documented interviews and in-depth tech documentaries like The Making of Killzone 2, Mastering the Cell, etc., does not mean your judgement that it isn't important or doesn't matter is justified.

Your argument is an endless rantfest that is polarized or biased to one side: con.
 
It has a more flexible core and programming model than GPU ones. It will need more software advances to take full advantage of the combo. The great thing is you still have traditional GPU cores to work with in parallel on the same frame in such an "equal/swappable" CPU + GPU model.

IMHO, this is the most interesting aspect of the PS3.

The problem is developers are forced to use Cell even in situations where simpler approaches would suffice.

Yes, or develop games for DirectX9, where such an approach is explicitly impossible. Had DirectX11 come out five years sooner ... ;)
 
Yes, or develop games for DirectX9, where such an approach is explicitly impossible. Had DirectX11 come out five years sooner ... ;)

It's confusing to think about just what you are saying here. Do you really think that Microsoft could have set and released the requirements for their trademarked, proprietary DirectX API five years ago, supporting all of those checklist features, with GPUs that were limited to 90nm (2005)? Just how would transistor counts play in that field?

I can see an individual GPU maker engineering similar proprietary specs in hardware ten years ago, but that would also imply that you as a developer would have to use custom software tools... we don't have a magic time machine.

And what makes you think that IF, say, Sony and Microsoft had agreed to wait and launch consoles in 2010 (with your DirectX-compliant GPUs), the PS3 would work any differently? I.e. they would still be using Cell (a 45nm derivative in this alternate history) to accelerate 3D features. And worst of all is the unfortunate fact that current GPUs are ill-equipped to be placed in a home console at their current 40nm process: they draw too much current and generate way too much heat, and therefore would need to be shrunk a couple more times to make realistic sense and not drive the console cost over $1000.

On that note, current consoles, despite their die shrinks, still use heavy-duty copper heatpipe heatsink solutions or just have a swiss-cheese airflow venting design. And btw, I personally believe that this current gen was started 3 years too early, and there are many competitive angles behind that; obviously Microsoft would not be too happy with the competition having access to similar tech, since there are only two main GPU companies and GPUs have become ridiculously expensive to make compared to over 10 years ago.

DirectX11 is not some magic potion that cures all ills just by drinking it; it's trademarked marketing propaganda. It works as per Microsoft's design, and theirs is a single-minded mentality. Other consoles following the traditional console design are opting for custom OpenGL for ease of use, or just low-level software that someone used to working under Microsoft's mentality would obviously hate, claiming that no matter how advanced it is, it's too hard.

It's not hard to believe that Microsoft would not support anything other than Microsoft products; they have definitely tried to eliminate OpenGL and have been pretty successful in preventing any other custom APIs, like PowerVR's and 3dfx's, from rising.

They have a lot of money invested in it, and competition would be a nightmare for them if more and more developers decided to take the OpenGL route or, worse, custom low-level dev tools. On that note, back in 2004 Sony and Nintendo were sweating bullets at a next-gen launch so soon, but given the possibility of a 3-year nightmare lead, they would still have come out with more advanced tech; the thought of Nintendo having access even to a G71 as their GPU in 2008 would give nightmares to Steve Ballmer and Bill.
 
You sound like someone who would wholeheartedly believe the evangelical Microsoft message that the GPU should do all the work while the CPU is free to do other tasks - a line they have been pushing since they convinced Sega to drop the second SH4 and went public with the Xbox in 2000.

Where has this dual-SH4 DC come from? Sega wanted a cheap and easy-to-develop-for system; why would they have been trying to cram more CPUs in there?

And given that the DC needed its CPU for T&L, because its GPU couldn't do T&L, why would MS be trying to convince Sega to do that on the graphics chip instead?

It's really just Microsoft DirectX PR; next thing you know they will want gamers to run Futuremark benchmarks on a console (notice that there have been no competing OpenGL benchmarks for over 8 years).

Evil Microsoft and their evil OpenGL HAL, [strike]allowing[/strike] forcing people to do things on the GPU that would be better done 100 times more slowly on the CPU. Yes, if only MS wasn't forcing people to use Glide to do things faster off-CPU!

the thought of Nintendo having access even to a G71 as their GPU in 2008 would give nightmares to Steve Ballmer and Bill.

The thought of a 2008 console using a G71 would give me nightmares too.
 
It's confusing to think about just what you are saying here.

Just saying that even if GPUs started having GPGPU functions quite a while ago, there was no forward-looking DirectX release to take advantage of such a setup, with DirectX9 remaining the lowest common denominator for a long, long time and Vista's failure extending that even further.

You see right now that developers who are targeting 11 get opportunities for performance gains that are similar to how you would program for Cell + GPU. You can see this in D.I.C.E.'s later presentations for example. Plenty of discussion there on what I'm getting at.

I'm not blaming anyone here necessarily - it's just that PC development in general had a big impact on how the PS3's architecture was (under)used by third parties. It wasn't meant as seriously as you are taking it now, mind; I know very well that first-party studios had trouble adjusting as well. I just wanted to point out that the consoles, and particularly the PS3, were far ahead of their time in some respects that are only now, with DirectX11, finally possible on PC (but it will probably still take a while before it becomes as well supported as DirectX9). And with PC and 360 having become such a big item for multi-platform development, and all the trouble that DirectX9 has had in progressing towards DirectX11, the PS3's 'outlandish' design has stayed outlandish for far longer than it needed to.
 
Where has this dual-SH4 DC come from? Sega wanted a cheap and easy-to-develop-for system; why would they have been trying to cram more CPUs in there?

And given that the DC needed its CPU for T&L, because its GPU couldn't do T&L, why would MS be trying to convince Sega to do that on the graphics chip instead?

This came from a rumor report (which stated it was dropped) in a 1997 or early '98 Next Generation magazine. As you just pointed out, the double-edged sword was that a second SH4 could take the T&L duties while the other would be free for other tasks, theoretically increasing performance, following up on the four-some years of dual-SH2 programming experience Sega's in-house dev teams had. The downside would be certain 3rd parties complaining, and with Microsoft being "hired" to make the console friendlier to developers, back in an age where dual-CPU setups were seen as evil or demonized, and with those 3rd parties really coming from the Microsoft OS PC platform, it's easy to see where the influence was really coming from.

Of course, later with the Xbox 1, Microsoft marketed the idea that developers would gain the freedom to do whatever they wanted by having the GPU do only graphics and the CPU (singular) free for other tasks.

Evil Microsoft and their evil OpenGL HAL, [strike]allowing[/strike] forcing people to do things on the GPU that would be better done 100 times more slowly on the CPU. Yes, if only MS wasn't forcing people to use Glide to do things faster off-CPU!

Call it or ridicule it however you want; would it be fair if I directly called you a Microsoft evangelist?

The PC's default operating system, as it has stood for the last 15 years, is dominated by Microsoft proprietary software APIs. Sure, they get legal complaints, but they have tried to eliminate any type of innovation that would not come from Microsoft.

Back in 1995, VideoLogic's first PowerVR chipset had its own API, Nvidia's NV1 had its own, and so on; Microsoft released the rule of compliance to "Direct3D X" to effectively terminate them. 3dfx managed to slip by with Glide, then got distracted with Sega (similar to Nvidia's engineers not being able to focus on NV30 since they were finishing NV2A), got too full of themselves (yet they had a lot of dev support, quite a splendid threat to MS), and eventually fell to compliance with DX specs, along with competitors who did not have independent blessings and chose to focus on compliance with DX specs... and then you get to OpenGL, which is really not a Microsoft proprietary trademark and instead encourages choice.

Have we seen Microsoft trying to encourage OS competition, or is it really what it is: a monopoly?

The thought of a 2008 console using a G71 would give me nightmares too.

I said that in the context of Nintendo using it; since a 55nm process was the standard, such a G71 would have been a possibility and would have been ideal, even if it was clocked at 430MHz and crippled to a 64-bit bus, because lower power draw and cooler running would have been priorities, yet they would have had far better checklist features. ;)

Just saying that even if GPUs started having GPGPU functions quite a while ago, [b]there was no forward-looking DirectX release to take advantage of such a setup[/b], with DirectX9 remaining the lowest common denominator for a long, long time and Vista's failure extending that even further.

You see right now that developers who are targeting 11 get opportunities for performance gains that are similar to how you would program for Cell + GPU. You can see this in D.I.C.E.'s later presentations for example. Plenty of discussion there on what I'm getting at.

I'm not blaming anyone here necessarily - it's just that PC development in general had a big impact on how the PS3's architecture was (under)used by third parties. It wasn't meant as seriously as you are taking it now, mind; I know very well that first-party studios had trouble adjusting as well. I just wanted to point out that the consoles, and particularly the PS3, were far ahead of their time in some respects that are only now, with DirectX11, finally possible on PC (but it will probably still take a while before it becomes as well supported as DirectX9). And with PC and 360 having become such a big item for multi-platform development, and all the trouble that DirectX9 has had in progressing towards DirectX11, the PS3's 'outlandish' design has stayed outlandish for far longer than it needed to.

Microsoft controls and decides what they feel like with Direct3D/DirectX API compliance. DirectX is not really hardware; it's an API that is hardware-agnostic, so even if ATI has TruForm in the R200, or its tessellation descendant in the C1 and the Radeon HD 2900, it is NEVER going to be used by any developer unless they break Microsoft's API, use a competing API, or go to custom low-level tools.

I'll pose another "what if" scenario. We all know Nintendo contracted ArtX; within a short time ArtX was bought out by ATI, and we know the R200 and its TruForm were not perfect, but ATI dropped the R250 in favor of focusing on ArtX's R300. Well, say the contract somehow gets renegotiated and the console delayed (I know, highly unlikely), but ATI insists on convincing Nintendo that it would have an edge with the R250 in a delayed GameCube (which would also get a faster CPU in the process and more RAM, but most likely the same mini-disc). Now Nintendo would have a GPU clocked at 300MHz, depending on 150nm or 130nm, but they would have the direct competitor to the NV2A, only they'd also have TruForm.

It's obvious a custom API is going to have to be used, and one that is not Microsoft-trademarked. Do you really think the Xbox 1 would have had bragging rights for 4 years of tech superiority in shipping games?

Getting back to the present, don't you see the strategy of forcing a next gen in 2005 on Microsoft's part? The 90nm C1 and NV47 ran very hot and sucked mad electro-juice as per tech limitations; a G80 was out of the question unless a significant die shrink was done, and my estimation is 55nm, using a G92b instead. You wanna use a GT200b @55nm? No, you cannot; you have to go to 40nm and maybe lower.

It's true that, wow, DX11 now supports all of these advanced features, but was it in Microsoft's interest that a G80, G92b, GT200b, or Fermi land in Sony's lap without some realistic thermal, power draw, and TIME cost?

The next-generation Xbox and PlayStation are going to have GPU feature differences, because just as MS was able to exploit having a unified shader pipeline, they are not going to hand the door over to Sony. But placing too much weight on DirectX or DX11 alone is kind of ridiculous; it's still just an API, not a magic mushroom.
 
Am I the only one not understanding the point of these rants? Is Microsoft being blamed for any perceived weakness in the PS3?

I also don't understand what the failure of proprietary hardware APIs has to do with Microsoft. Microsoft provided a hardware-vendor-neutral API, but even if they hadn't, OpenGL would have killed Glide, etc., because, unlike Windows, graphics hardware was not a monopoly and consumers didn't want it to be one.
 