software cell distribution

McFly

Veteran
How likely do you think software cell distribution outside the box will be with the PS3? (I'm not talking about distributing it over the internet.) I thought that was the main reason to go with the Cell technology, yet lately I just hear everyone guessing about TFLOP numbers. What if there were many versions of the PS3: one with 256 GFLOPS but very cheap, one with 1 TFLOP at around the $500 mark, and maybe one well above 1 TFLOP with some extras like a HDD, Blu-ray ... (PSX style) for the rich ones among us? What about the possibility of getting the cheap one first and, after a year, adding an expansion box with some extra TFLOPS? Why a closed system? I mean, it would not be like the PC platform, where you need to optimize for every gfx card; you would just need to write for a variable Cell architecture with a varying number of APUs and clock frequencies in mind. I thought that was the really big advantage of the Cell architecture, besides the potentially high FLOP rating.
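Just to illustrate what I mean by writing for a variable number of APUs: a toy sketch (the API, the `apu_count` parameter, and the round-robin scheme are all invented by me for illustration, not anything Sony has announced):

```python
# Illustrative sketch: split a frame's worth of work across however
# many APUs the particular box reports, instead of hard-coding a count.

def split_work(items, apu_count):
    """Partition a list of work items into one batch per available APU."""
    batches = [[] for _ in range(apu_count)]
    for i, item in enumerate(items):
        batches[i % apu_count].append(item)  # simple round-robin
    return batches

# A cheap box might report 4 APUs, a pro box 32; the same code
# scales its batching automatically.
cheap = split_work(list(range(100)), 4)
pro = split_work(list(range(100)), 32)
```

The point is only that the game code never names a fixed APU count; each machine's configuration decides how the work fans out.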

Fredi
 
Why would there be three versions each with vastly different performance? That does seem rather pointless.
 
That was just an example, but I don't think it would be pointless. It would be a great way to reach all the buyers. The kids with no money would buy the cheap version (and maybe upgrade later), the normal gamers would get the regular version (when they saw the gfx of games optimized for the pro version, they would upgrade to it or buy the expansion box), and the hardcore gamers would get the pro version (or maybe even more than one).

Of course, this would only be possible if developers wrote nothing but scalable software-cell code that can be distributed, rather than coding down to the metal.

Fredi
 
Consoles are all about fixed hardware. Peripherals, even those that do exceptionally well, only sell to a small fraction of the user base. If there were three different PS3s, software would target the lowest common denominator (because devs are lazy just like the rest of us); sales of the larger PS3 versions would therefore suffer even more, because there would be little reason to pay more. Games would still look more or less the same.
 
What about scalable gfx engines? Sony could ship the dev kits with a scalable middleware gfx engine that delivers great results on the cheap Cell hardware and nearly photo-realistic results on the pro version.

Let's think about that for a moment. If you use tessellated models in combination with normal polygonal models for flat things, such an engine would definitely be scalable. Textures would not be as easy, but still possible. Anti-aliasing would be possible. Real-time shadows on every object would be possible. Better draw distance ... What speaks against a scalable 3D engine? Nothing.
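To make the tessellation point concrete, such an engine could pick its subdivision level from whatever power it finds at runtime. A toy sketch (the power ratings and the one-level-per-4x rule are my own invention, purely for illustration):

```python
def tessellation_level(gflops, base_gflops=256):
    """Pick a subdivision level that grows with available compute.

    Each subdivision step roughly quadruples the triangle count, so
    add one level for every 4x the baseline machine's power.
    """
    level, ratio = 1, gflops / base_gflops
    while ratio >= 4:
        ratio /= 4
        level += 1
    return level

# Same model data, different detail per machine:
# a 256 GFLOPS box renders at level 1, a 1 TFLOPS box at level 2,
# a 4 TFLOPS box at level 3 -- no per-machine art assets needed.
```

The artist authors one model; the engine decides how finely to subdivide it, which is exactly the kind of knob that scales without rewriting the game.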

Fredi
 
No, console hardware is meant to be a fixed platform that allows the developer to get the most out of it. It would be a burden on developers if there were different versions of the same system with differing levels of performance. Development costs would go up, and it could theoretically split the market if a developer decided to develop only for a specific performance level in a certain demographic. It's just not a good thing for a company to do, and it might end up driving third parties to competing consoles that are each a single machine with uniform hardware. In other words, devs might end up running the other way, to MS or Nintendo. The last thing we want is a platform with differing levels of performance; it would take away from optimisation, and that's the last thing we need.
 
Interesting concept; however, it just will not happen.

The Broadband Engine, the Cell-based PS3 MPU that Toshiba and Sony have been developing since May 2001 (though "Cell", the architecture behind the BE, has been in development for much longer), will only have one spec.

We'll see the BE soon enough.
 
Sonic said:
The last thing we want is for a platform to have differing levels of performance, it would take away from optimisation and that's the last thing we need.

Everyone is talking about the "fact" that graphics will not be that different across the next-gen consoles, so maybe making optimisation impossible would actually make development easier, not harder: you wouldn't have to hunt for every little bit that has to be moved from one memory block to another, or chase small timing problems, as the 3D engine would do all those things on its own depending on how much power is currently available.

Of course, developing such a 3D engine would not be easy, but it's surely not impossible.

Fredi
 
Hopefully within the next 3 weeks or so, we'll see some type of Cell demo.


Playstation 3 is still at least 2 full years away from release in Japan, and probably 2.5 years for America and Europe.
 
I didn't realize that everybody came to the conclusion that graphics will not be that much different in the next generation. I know quite a few people who think the architectures themselves will be quite different in how they render the graphics. If that happens then the 3d engine had better be very adaptable to different ways of handling graphics to get similar results. Also, why would we want optimisation made impossible? That would just add to a world of lazy coding that plagues so many games these days, something we definitely do not need.

I've said it before in another thread. The concept of having differing levels of performing hardware is interesting, but it would most likely segment the market of the platform as a whole. It may have similar architecture but if there are very different levels of hardware then which one do you cater for? Do you aim for the high end platform then water the graphics down? Do you aim for the low end and gradually increase the graphics as we go? Either way it will increase development time and the results might not be desirable.

I could be entirely wrong in my perception of what Sony's version of CELL is going to be. I am of the understanding that the PS3 will have a very specific configuration for the gaming side and only the gaming side. I know CELL is a scalable architecture in that units can be added and subtracted for varying levels of performance and needs. The CELL going in PS3 will have a configuration for itself and that will be applicable to all PS3 machines. Maybe it would be a baseline with more CELL units being added for different tasks and having an all-in-one PS3 machine, but those additional CELL units would not be used for any of the PS3 games.
 
I could be entirely wrong in my perception of what Sony's version of CELL is going to be.

Let's go further.

I am of the understanding that the PS3 will have a very specific configuration for the gaming side and only the gaming side.

Correct; this is what Figure 6 shows, and Figure 6 complements the Rambus, SCE, Toshiba contract.

I know CELL is a scalable architecture in that units can be added and subtracted for varying levels of performance and needs.

Yes. Multiple PEs (or just one) can form a single chip, and multiple APUs (or even one) can form a PE.
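That hierarchy (chip contains PEs, PE contains APUs) is simple enough to model directly. A minimal sketch, where the unit names follow the Cell terminology above but the specific counts and clocks are invented:

```python
from dataclasses import dataclass, field

@dataclass
class APU:            # Attached Processing Unit: the basic compute element
    clock_mhz: int

@dataclass
class PE:             # Processor Element: one or more APUs under a control core
    apus: list = field(default_factory=list)

@dataclass
class Chip:           # a chip is one or more PEs
    pes: list = field(default_factory=list)

    def total_apus(self):
        return sum(len(pe.apus) for pe in self.pes)

# Two hypothetical configurations of the same building blocks:
small = Chip(pes=[PE(apus=[APU(4000)] * 8)])                    # 1 PE x 8 APUs
big = Chip(pes=[PE(apus=[APU(4000)] * 8) for _ in range(4)])    # 4 PEs x 8 APUs
```

Same blocks, different counts: that's all the "scalable" claim amounts to at the hardware level.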

The CELL going in PS3 will have a configuration for itself and that will be applicable to all PS3 machines.

I see no objections here.

Maybe it would be a baseline with more CELL units being added for different tasks and having an all-in-one PS3 machine, but those additional CELL units would not be used for any of the PS3 games.

You don't even need more PEs inside the PS3 for other functions. The way things are now, you can segment off a single APU or a small group of them, and with their sheer power you could accomplish just about anything you wanted.

The way PS3's chipset is looking (Figure 6), the thing is designed to handle everything. (Remember, the BE can also handle network packets: "software cells".)
 
Why not just buy a PC and an expensive video card? Because that's exactly what would happen here. You would lose the optimization that happens on consoles, turning them into a watered-down PC gaming platform.

Besides, one of the things that makes consoles so great is that each is a fixed platform.
 
While there will no doubt be different high-end implementations of Cell, perhaps with several teraflops of performance, there will only be ONE Playstation 3. Otherwise, if you had several PS3s with various levels of performance, you'd end up with several different formats. It doesn't make sense; it splits the user base.


I suppose Sony, or anyone, CAN do this sort of thing if they chose to. NEC actually did it within two years of releasing the PC Engine: the PC Engine came out in 1987, then in 1989 NEC released the PC Engine SuperGrafx with extra RAM and video chips, a more powerful PCE that was compatible with the first. I myself would not mind seeing a 'Super Playstation 3' in 2007-2008, but it is unlike Sony to do such a thing. The Playstation Type C, which apparently was made (4x CD-ROM, more VRAM), never got released.
 
McFly said:
What about scalable gfx engines?

No. Graphics don't make themselves, you know... If you have a low-end piece of hardware, you could target it and just be done with it (and hopefully rake in the cash from sales and lie back sipping an umbrella drink on some beach somewhere). Why would you go out of your way to make a version of the game that, on some minority of machines, gives more or less near-photorealistic graphics?

You're not going to earn any more money, especially not when you have to spend oodles more cash in development time making all those graphics to begin with.

... what speaks against a scalable 3D engine? Nothing.

You're not thinking like a dev, you're thinking like a gamer! ;) What speaks against it is time, money, and sheer laziness. Why do more work than you have to, especially if only a minority will get to enjoy it?

Oh, a few devs would probably do it, but not enough of them for the majority to justify buying a $500 console.
 
While I don't think Sony will segment the market like this, it could make sense in some cases. For example, when people buy a TV they have to choose the size and resolution; doing the same for a console isn't a big stretch. The cheap version could ship with less eDRAM for the framebuffer, etc.

The reasons I think they won't do this are that it adds confusion console buyers aren't used to, and that add-ons like a hard drive and network adapter have never caught on with the mainstream audience. A high-end console would effectively be an add-on.
 
To some degree, 3dfx did it with Voodoo2 SLI.

The only way I could see Sony doing something along these lines is if the PS3 were built to be scalable by connecting multiple PS3s together. Not over the internet (that won't work now, and has been discussed countless times), but simply by linking them together physically, either by stacking them or with a link cable. Some games could be built with higher detail and higher framerates to take advantage of more than one PS3. But the likelihood of this actually happening is in the single digits.
 
I suppose Sony, or anyone, CAN do this sort of thing if they chose to. NEC actually did it within two years of releasing the PC Engine: the PC Engine came out in 1987, then in 1989 NEC released the PC Engine SuperGrafx with extra RAM and video chips, a more powerful PCE that was compatible with the first. I myself would not mind seeing a 'Super Playstation 3' in 2007-2008, but it is unlike Sony to do such a thing. The Playstation Type C, which apparently was made (4x CD-ROM, more VRAM), never got released.

The purposes of doing those things are:

1. Instant backwards compatibility
2. Cheaper on R&D costs
3. Faster and easier for developers
4. Increase the lifespan of a dated console

I think archie4oz mentioned that the original PC-FX concept had hardware for backwards compatibility with PC-Engine, but was removed because of costs.

For a new console, it's usually better for performance to have a brand-new architecture. However, I think after the release of the PS3, the PS4, PS5, PS6, etc. will just have more CELL units instead of going to a completely new architecture, since the basic building blocks (general-purpose computing units) won't need to change.
 
3dcgi said:
While I don't think Sony will segment the market like this, it could make sense in some cases. For example, when people buy a TV they have to choose the size and resolution; doing the same for a console isn't a big stretch. The cheap version could ship with less eDRAM for the framebuffer, etc.

What makes you think this would be any kind of savings for the manufacturer? They'd have to develop and verify two CPU/GPUs, manufacture two different models (thereby tying up more fab resources), run two separate assembly lines, keep track of supplies for those two lines, etc. A big part of the economics of mass production would be lost, especially for the more expensive version.

It adds up to a whole lot of bother for the manufacturer, in addition to potential confusion for buyers, plus extra work for developers. OK, for some people it might be a neat idea, but it would never work in reality; the whole point of a console is to have ONE hardware unit only. ...Which is why Sega created two CD units and the 32X, I suppose... ;)

(No Lazy8, don't hit me! :LOL:)
 
Well if they are indeed trying to utilize networking capabilities--CELL-enabled devices able to assist each other when directly connected or networked together (as over ethernet)--then it would point a bit more at software that could take advantage of additional added power, wouldn't it? I figured the likelihood of actual "enhanced" consoles--or multiples that can chain--would depend on how well they could work that out on the software end, and what advantage it would actually offer.

It seems a bit more likely that we could see that on CELL systems than other ones--due to their nature and stated desires for the chips--but it will certainly depend on what can be realized at launch, and how complex it would be programming-wise to push it further.
 