Predict: The Next Generation Console Tech

Status
Not open for further replies.
nAo said:
I'm also wondering why Microsoft and Sony have not provided to developers optimized shader compilers for their CPUs.
Sony had CG compilers for SPUs very early on (before PS3 was even publicly announced). Why none of that made it into the wild one can only wonder. As you said, the model can be a useful one for problems well outside graphics.
They've spent a lot of resources to improve the tool side (and that obviously helped), but the lack of cohesion in the approach still strikes me as exactly like the PS2 days.

In order to have an efficient software renderer based on rasterization and texture mapping you'd need to drastically modify CELL architecture.
Less drastic if you stick with deferred model (by deferred I don't mean just shaders). But yea, current arch. wouldn't cut it anyway, even IBM already admits some of the more obvious flaws in recent patents (opaque SIMD and so forth).
 
Last edited by a moderator:
Well, I'm wondering why we don't have something like CUDA (at least a subset of it..) on CELL, what were/are they waiting for?
I'm also wondering why Microsoft and Sony have not provided to developers optimized shader compilers for their CPUs. Texture sampling aside (which doesn't map well to 360 and PS3 CPUs) modern shaders programming model can be used to tackle a lot of different problems, while being simple to use, fast to compile and relatively easy to debug. It seemed to me such a straightforward thing to do, but I guess I was wrong :)
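The "shader programming model for general problems" point can be made concrete: a shader is essentially a pure per-element function that a runtime maps over a buffer, the way a pixel shader is mapped over fragments. A minimal sketch of that contract in C (the names `kernel_fn`, `dispatch` and `scale_bias` are illustrative, not any real SDK API):

```c
#include <stddef.h>

/* A "shader": a pure function of one input element and some uniforms,
 * with no side effects outside its own output slot.                 */
typedef float (*kernel_fn)(float in, const float *uniforms);

/* The runtime's job: map the kernel over a buffer. On a GPU (or
 * across SPUs) the iterations run in parallel; a serial loop models
 * the same contract, which is what makes the model easy to compile
 * and debug.                                                        */
static void dispatch(kernel_fn k, const float *in, float *out,
                     size_t n, const float *uniforms)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = k(in[i], uniforms);
}

/* Example non-graphics "shader": scale-and-bias, with
 * uniforms[0] = scale and uniforms[1] = bias.                       */
static float scale_bias(float in, const float *u)
{
    return in * u[0] + u[1];
}
```

The kernel itself stays simple to write and reason about regardless of how the dispatch is parallelised, which is exactly the appeal described above.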

Probably because MS is a software vendor and concentrates on software-level tools such as IDEs etc., and Sony is a CE vendor and in general doesn't have much experience with low-level developer enabling. Contrast MS/Sony with the silicon vendors such as ATI/Nvidia, which have both the experience and a vested interest in enabling new, advanced ways of programming their designs.

All the GPGPU stuff came from universities and the research side.
 
It's already difficult to convince ppl with a 360 AND a PS3 devkit on their desks to work on PS3 (with visual studio), if you take it away from them you'd better cancel the PS3 version of your game too :)
Sad but true..


In order to have an efficient software renderer based on rasterization and texture mapping you'd need to drastically modify CELL architecture. I just don't see this happening.


That's not a very nice thing to say about people nAo :)

Anyhow you can rig MSVS yourself ( and still have to deal with an unhealthy number of things which are PS3 specific and aren't integrated into MSVS ) or you can blame IBM if anything goes tits up in RAD...which would you prefer?

I was speaking to a uniform framework/toolchain..a "do it all here" solution for those who just can't stomach having to leave an IDE for a moment. I am also advocating putting IBM on the hook for tools development...they've been playing on this front longer and gotten better results than Sony. Sony should act like they DESERVE the same integrated support as IBM's other customers. It's perplexed me from day one.

( I still say RAD can hang with MSVS any day of the week...but I see your point )


--------------

As for my software rendering comment, please note that I said Sony and IBM would have to collaborate...take a guess as to what IBM's role would be? "Changing" Cell so that it COULD replace a GPU. I realize that as Cell is now it would not cut it. Regardless of software rendering or not, Cell needs significant updates in a number of areas, as I stated earlier. I've already advocated changes in Cell merely to help developers out...I wouldn't stop there if Cell were meant to completely replace the GPU in the system as well.

The reason I target Cell2 for Sony is because it is the logical choice for them. Do you or anyone else truly think Sony will adopt Larrabee if MS does so as well?

I simply can't see it happening.
 
That's not a very nice thing to say about people nAo :)

It's actually a relatively significant problem. Every generation, you get more PC guys coming over to console development. Many of them are exclusively Windows developers (maybe with some Amiga background) and they have used VS for ages. They are very comfortable and efficient with it, as they have invested a lot of time in learning their tools. Hand them SN's ProDG (which is the better debugger, hands down) and they are lost. It's the same thing with the SDKs themselves, really.

Basically, all of these things have learning curves, and pretty steep ones at that. I know that there is this public image of the 16-hours-a-day-no-challenge-is-big-enough devs and the hidden perception that maybe we're all lazy, but honestly, what do you expect? Not everyone in the industry wants to spend their entire life working. And if you do, maybe you want to do something that results in a better game and, you know, increased profits. Learning another complicated IDE seems like a waste of time to many people.

[Disclaimer: Reading over this again, I think it might come across defensive. It is not, really. I just know a good chunk of good game developers who are not absolute tech-heads and who, as nAo said, basically refuse to touch the PS3, simply because they don't know the tools.]

That said, MS has a huge advantage here. VS is a great source editor, especially if you add VisualAssist. It will be hard to beat that without MS dragging you to court for making a verbatim copy of their interface. :)
So it's less about being powerful than about being powerful and accessible. Nobody can afford to waste time.
 
[Disclaimer: Reading over this again, I think it might come across defensive. It is not, really. I just know a good chunk of good game developers who are not absolute tech-heads and who, as nAo said, basically refuse to touch the PS3, simply because they don't know the tools.]
And it's a rational decision. If a keyboardist wants some guitar tracks, they've got three options -
1) Spend a bit of money and learn guitar. Will sound brilliant, but how long is that going to take?
2) Spend a bit of money on an artificial guitar simulation synth, which is good-ish, but not great.
3) Pay a fair bit of cash to a guitarist to provide the guitar track, which only works for one track.

It's not at all crazy to see keyboardists buying the easy hack that works with the system they know than go to the extreme of learning a whole new system, or splashing out on contracting additional artists. It's a workable compromise. You wouldn't call a musician lazy for spending his time working on the keyboards he knows instead of learning whole new instruments. And you shouldn't call a developer lazy for spending his time on the systems he knows instead of learning whole new ones.

There's of course a different dynamic in the software biz, and developers will feel pressure one way or another to follow a path, but the 'lazy' comment is utterly misplaced when applied to people working an office-hours job to pay the bills.
 
I find the music analogy interesting, I could go even further.
Say we speak of studio musicians, or young/unproven bands.

You will find that a lot of them use the same gear: Les Paul / Strat / etc. It's not that these instruments are perfect for everyone, but producers know how an LP or a Strat will sound through a Marshall, etc. It comes down to costs. And in the end quite a few bands sound the same...
 
There's of course a different dynamic in the software biz, and developers will feel pressure one way or another to follow a path, but the 'lazy' comment is utterly misplaced when applied to people working an office-hours job to pay the bills.

We usually try to keep the "regular" people on the platform they like the most and make the tech-guys hop environments, as we kinda like that. This also has the added benefit of saving money on devkits. ;)
Of course that comes at a price, one that the "lead on PS3"-faction is rather uncomfortable with, namely that someone developing on a forgiving platform will give you some less than optimal code.

But this belongs more to the Multiplatform discussion, really.

Nice analogy, BTW.
 
The analogy is nice but I don't believe it's quite right.
This is not about some musician who has to play a completely different instrument; we are talking about a keyboard player who has to play a classical piano.

Debugging code on PS3 just requires you to learn how to use a new debugger, which by the way is much more powerful than what developers have available on 360 (it has to be that way, PS3 being a more complex platform to develop for..).
The funny thing is that all the most important keyboard shortcuts are replicated from Visual Studio, so someone can in theory start to debug on PS3 without even needing to learn new commands/menus/shortcuts. But hey..the window layout is different, the colors are different, this new debugger might bite you back, nonono...they are not going to use it, it might be dangerous for their health.
 
i do get it, it's basically like scales and arpeggios on guitar (for example).
It's always the same notes: tonic, third, fifth for a simple one but there are quite some different fingerings.
Changing pattern is not easy (I know I picked up some bad habits with regard to some positions... I wonder sometimes if "unlearning" after some time might be tougher than learning).

Anyway... I stop with those comparisons... :LOL:
 
The analogy is nice but I don't believe it's quite right.
This is not about some musician who has to play a completely different instrument; we are talking about a keyboard player who has to play a classical piano.

Debugging code on PS3 just requires you to learn how to use a new debugger, which by the way is much more powerful than what developers have available on 360 (it has to be that way, PS3 being a more complex platform to develop for..).
The funny thing is that all the most important keyboard shortcuts are replicated from Visual Studio, so someone can in theory start to debug on PS3 without even needing to learn new commands/menus/shortcuts. But hey..the window layout is different, the colors are different, this new debugger might bite you back, nonono...they are not going to use it, it might be dangerous for their health.

Put like this, it sounds like a case of unprofessional developers... it would be like working at a CPU maker: when they tell you to place and route that portion of logic by hand, you balk and stomp your feet on the ground... next step is a layoff notice. It cannot be that bad.
 
Put like this, it sounds like a case of unprofessional developers... it would be like working at a CPU maker: when they tell you to place and route that portion of logic by hand, you balk and stomp your feet on the ground... next step is a layoff notice.
Doesn't that depend on the POV of the people in charge? If the coder says 'I can't work out these PS3 tools' and the employer's response is 'yeah, they're a dog, aren't they?' then the attitude is self-supporting.

Having not used either platform SDK I won't even hazard a guess at the differences, but I know in something easy like word processing, I'm so used to Word's way of doing things now that I don't want to even try anything else. The slight deviations in OpenOffice's Writer were sufficiently different for me to remove it before ever giving it a proper look. If someone ordered me to use it, I'm sure I'd adapt quickly enough, but as long as I have a choice I stick to the path well trodden. As long as a whole developer company shares a reluctant view on PS3's tools, they're not likely to seek change.
 
The funny thing is that all the most important keyboard shortcuts are replicated from Visual Studio, so someone can in theory start to debug on PS3 without even needing to learn new commands/menus/shortcuts. But hey..the window layout is different, the colors are different, this new debugger might bite you back, nonono...they are not going to use it, it might be dangerous for their health.

Yeah, keyboard shortcuts are very similar. There are even VisualAssist commands in ProDG, like Ctrl+Alt+O (I guess they both copied it from the same place. ;)). Anyway, remind me not to work where you are working. It seems painful. ;)
 
For PS4, the main processor may be IBM's PowerXCell 32i, which should be out around 2011-2012.

This processor is the successor of the PowerXCell 8i (the DP implementation of Cell BE).
I've heard from a source that SCEI, now working on the next system (PS4),
plans to continue with the Cell BE architecture going forward.

PowerXCell 32i consists of two next-gen Power-based cores (maybe Power6 ISA, RISC based) with 32 SPEs (each SPE with 1MB local store and a cache-coherent design like Larrabee). So there are 2 next-gen Power-based cores with 32 SPEs on one die, running around 3.2GHz on a 45nm CMOS 10-metal-layer process.

The GPU may be a future Nvidia GPU with built-in PhysX hardware. Clock speed is not being discussed yet.

Rambus is also teaming up with SCEI again on the XDR2 controller and the I/O on the CPU chip, and Samsung has been contacted to be the OEM for its next-gen memory chips. The memory size for PS4 is still under discussion. Some insider sources said the memory size for the next PS will be 8 times that of the present product (8 x 256MB = 2GB). For more info see http://www.rambus.com/us/products/xdr/xdr2.html


As for the media slot, it may adopt a 4-layer BD-ROM drive (200GB BD-R disc support) with 16X or higher speed.

PS4 will have built-in SATA version 3 for SSD/HDD, plus the terabit controller from Marvell and all the stuff of USB 3.0, 802.11n and Bluetooth.

That's all I know about the next PlayStation machine for now.
 
Yeah, keyboard shortcuts are very similar. There are even VisualAssist commands in ProDG, like Ctrl+Alt+O (I guess they both copied it from the same place. ;)). Anyway, remind me not to work where you are working. It seems painful. ;)
I hear stories like this one all the time from friends and former colleagues working at other companies, in the USA and EU.
 
1MB of local store per SPU seems quite unlikely and not even that logical to me.

32MB of cache sounds like overkill here also. Wouldn't it make more sense to increase the cache on the GPU, which, if going with Nvidia, will actually have PhysX support? That kind of setup would allow them to get away with a weaker CPU setup.
 
1MB of local store per SPE compared to a total of 2GB of XDR2 memory on the CPU side is reasonable for this design. Compare that with the PS3 architecture's 256KB local store versus 256MB of XDR memory. The next PS will outclass PS3 by far in both memory bandwidth and memory access latency. This will eliminate many of the hurdles that appear in PS3 development through the hardware itself, and bring us a higher level of multi-core performance for next-gen gaming.

SCEI has learned its lesson from PS3. PS3 is a big step away from the PS, PS2 and PSP:
they moved from the MIPS camp to the IBM camp. So it will take them a long time to make the IBM architecture as friendly as the old MIPS LSI parts, which SCEI had worked with closely for around 14 years since the PS.

Another point on the GPU part: PS3 is the first console from SCEI implemented by a GPU vendor rather than Toshiba, NEC or the other Japanese custom GPU camps. This setup forced most Japanese developers to jump into 3D rather than the sprite age. It was damn hard for many
well-known publishers from the PS and PS2 age, such as TAITO (under SQEX), HUDSON (under KONAMI), SNK and other publishers with small budgets, to ride on PS3.

SCEI estimated these issues quite well. So PS3 is only the beginning of the Cell BE and Nvidia GPU team-up. The PS4 and PS5 may be better in terms of Japanese 3rd-party support. Their 10-year life cycle for PS3 speaks to this.

If you can all code PS3 quite well, you'll easily code PS4, and PS5 and beyond. CPU and GPU performance keeps climbing, so there will be more challenges for us all when the next-gen consoles arrive.
 
1MB of local store per SPE compared to a total of 2GB of XDR2 memory on the CPU side is reasonable for this design. Compare that with the PS3 architecture's 256KB local store versus 256MB of XDR memory. The next PS will outclass PS3 by far in both memory bandwidth and memory access latency. This will eliminate many of the hurdles that appear in PS3 development through the hardware itself, and bring us a higher level of multi-core performance for next-gen gaming.

But how much would each SPE increase in power? I thought the plan was to add more SPEs, so you would have 32 of them instead of 8. If they were all as powerful as the 8 in the PS3, would you need to increase the cache to 1MB? That would make a huge chip. You're already looking at a 4x increase in processing power before any tweaks to the individual SPEs. I think perhaps 512KB per SPE would be better.

SCEI estimated these issues quite well. So PS3 is only the beginning of the Cell BE and Nvidia GPU team-up. The PS4 and PS5 may be better in terms of Japanese 3rd-party support. Their 10-year life cycle for PS3 speaks to this.
It depends on how the PS3 does when the PS4 launches, the tools available at launch, and what the competition is doing at the same time. We've seen countless times in the past, with the PS1, PS2, Genesis and Nintendo, that the most powerful system doesn't mean the most support or the greatest success.

If you can all code PS3 quite well, you'll easily code PS4, and PS5 and beyond. CPU and GPU performance keeps climbing, so there will be more challenges for us all when the next-gen consoles arrive
Well, who knows about that; will anyone be happy developing for a 128 or 256 SPE chip?
 
1MB of local store per SPE compared to a total of 2GB of XDR2 memory on the CPU side is reasonable for this design. Compare that with the PS3 architecture's 256KB local store versus 256MB of XDR memory.
It's not about the relative cache:RAM ratio. LS on chip, whether true LS or cache, needs to be balanced against the workload the processing units are doing and the chip real estate it consumes. If the number of processors increases by 4x at the same clock speed, a 4x increase in RAM BW would see the processing units fed just as well as they are now, all things being equal. The question is what 1MB of LS per SPE gains you instead of having either more logic units on the chip or a smaller, cheaper, cooler chip. Will 1MB be enough, and wanted, to keep a second active thread going and increase efficiency, or will it just be a big lump of prefetched data, because the system BW can feed the units fast enough anyway?

Well, who knows about that; will anyone be happy developing for a 128 or 256 SPE chip?
By the time you get to 128+ cores, surely even by the time you get to 32, you'll be needing virtualised processing resources and job scheduling. If those tools are effective enough, whether you have 32 cores, 128, or 2048, it shouldn't make any difference to the developer. They just have to throw jobs at the CPU and it'll churn through them.
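That "throw jobs at the CPU" contract can be sketched in a few lines of C. This is an illustrative toy, not any real console SDK (the names `job_t`, `queue_push` and `queue_run` are made up): a real runtime would have N worker cores popping the queue concurrently, but the code submitting work looks the same whether there are 8 cores or 2048, which is the point.

```c
#include <stddef.h>

/* A job: a function to run plus the state it operates on. */
typedef struct {
    void (*fn)(void *);
    void *arg;
} job_t;

#define MAX_JOBS 256

/* Fixed-size ring of pending jobs. */
typedef struct {
    job_t jobs[MAX_JOBS];
    size_t head, tail;
} job_queue;

/* Submit work without knowing or caring how many cores exist. */
static void queue_push(job_queue *q, void (*fn)(void *), void *arg)
{
    q->jobs[q->tail % MAX_JOBS] = (job_t){ fn, arg };
    q->tail++;
}

/* In a real runtime, many worker cores would pop concurrently; a
 * serial drain expresses the same contract.                        */
static void queue_run(job_queue *q)
{
    while (q->head != q->tail) {
        job_t j = q->jobs[q->head % MAX_JOBS];
        q->head++;
        j.fn(j.arg);
    }
}

/* Example job: bump a counter passed in as state. */
static void add_one(void *arg)
{
    (*(int *)arg)++;
}
```

Scaling the core count then changes only how fast the queue drains, not how the game code is written.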
 
The next version of Cell needs, imho, to address not just the local store 'problem' (which isn't a problem per se..) but the latency problem. While it's true that in many cases access to data can be streamlined, this costs time and makes programs more complex and difficult to debug. They need to add some sort of hw threading to the SPUs..
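Without hardware threads, SPU code hides that latency by hand, classically with double-buffered transfers: kick off the fetch for chunk i+1 while processing chunk i. A portable sketch of the pattern, where `dma_get` is a hypothetical synchronous stub standing in for the real asynchronous MFC transfer plus tag wait (not an actual SDK call):

```c
#include <string.h>
#include <stddef.h>

#define CHUNK 64  /* elements per "DMA" transfer (illustrative) */

/* Hypothetical stand-in for an async DMA engine; a real MFC get
 * would return immediately and be waited on by tag later.       */
static void dma_get(float *local, const float *remote, size_t n)
{
    memcpy(local, remote, n * sizeof *local);
}

/* Stream-process n elements (a multiple of CHUNK) with two local
 * buffers: while chunk i is processed, chunk i+1 is (conceptually)
 * already in flight. This manual overlap is roughly what hardware
 * threading on the SPUs would provide for free.                  */
static float sum_stream(const float *remote, size_t n)
{
    float buf[2][CHUNK];
    float total = 0.0f;
    size_t nchunks = n / CHUNK;

    dma_get(buf[0], remote, CHUNK);               /* prime buffer 0 */
    for (size_t c = 0; c < nchunks; ++c) {
        if (c + 1 < nchunks)                      /* kick off next  */
            dma_get(buf[(c + 1) & 1], remote + (c + 1) * CHUNK, CHUNK);
        for (size_t i = 0; i < CHUNK; ++i)        /* process current */
            total += buf[c & 1][i];
    }
    return total;
}
```

The complexity nAo mentions is visible even in this toy: the buffer juggling and the prime/kick/process structure are all latency-hiding boilerplate that has nothing to do with the actual computation.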
 