How would you react when the PSX3 spec reads 240~480 GFLOPS?

I think Pana should do a write-up on how Cell differs from normal PC stuff. You know, the thing that makes it "cool" and "interesting". Keep it in simple English though, and then maybe we can have a sticky for it.
 
What, you only trust it coming from Pana? If you don't know yourself why it is different, btw, then how can you be so critical of it in the first place? This raises all sorts of questions about you, FWIW. If you want to know more, why don't you go over to Ars and read a blackpaper on it or something?

(This is not to say a piece of writing from Pana wouldn't be interesting. Just throwing in more suggestions, and raising eyebrows at Chap's peculiar request.)
 
akira888 said:
Argh... ok....
It's pretty clear from my post that what I was referring to as "too traditional" was the GS and only the GS. I believe my point is fairly obvious, seeing as how that chip was far more basic than the CLX2 despite the fact that DC was released 16 months prior to the PS2.

You do realize that there is an entire ideological body outside the realms of the nVidia <-> ATI spearheaded paradigm. I know, as my post stated, that this place is becoming nothing more than a place where bullshit arguing-points take precedence over talk of fundamentals, but in the spirit of this discussion try to think outside the box.

There are other ways to render a scene, by utilizing hardware that's not dominated by fixed functionality and arbitrary restrictions on what's an implemented "feature" and what's not. These competing ideologies can exist in the console realm (thank God), and the GS - which I was referring to - falls under this.

It's so easy for people to just look at the GS as this "[VoodooGraphics]*[16]" or whatever you called it, without seeing what SCEI's engineers are envisioning. I think the term 'Renderman in silico' was once used, and it's more or less what I feel they're striving for. They're giving you this tremendous amount of front-end computational resources that are connected with high-bandwidth buses and small, but fast, caches, and attaching a basic and fast-as-shit rasterizer that exists only to do highly iterative tasks. I happen to think this ideology is pretty damn neat, and kinda elegant in design.

But, instead of thinking about the design in toto and outside of the PC paradigm, people such as yourself can't rise above the petty comparisons to the PC (which is so diametrically opposed), calling it primitive or basic or "too traditional." Something which couldn't be further from the truth. Diefendorff's publications talk at length about this ideology, and they keep popping up too.

I figured my (limited) point would also be obvious especially after my positive comment about VU0 and VU1.

So, you think that by posting a positive, us "Sony-ites" won't jump on you for an ignorant comment? Believe it or not, I don't care how much you praise it or what architecture you're putting down - if you're wrong, I'll say so.

In a network-latency-intolerant environment such as real-time gaming, I fail to see how distributed computing would be in any way usable. Think about your "ping" on your network at home. If you can connect with even a latency of around 32 ms, that's pretty good on DSL. Yet that represents 2 whole frames just to get your data across, and then you need 2 more to retrieve it.
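(For reference, here's that arithmetic as a quick sketch, assuming 60 fps frames as in the quote:)

```python
# Back-of-the-envelope: how many 60 fps frames a given one-way latency costs.
FRAME_MS = 1000.0 / 60.0  # ~16.67 ms per frame at 60 fps

def round_trip_frames(one_way_ms: float) -> float:
    """Frames spent shipping data out and getting the result back."""
    return (2 * one_way_ms) / FRAME_MS

print(round_trip_frames(32.0))  # ~3.8 frames: roughly 2 out and 2 back
```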

It's almost easier to just tell people they're right than to explain it (as this has been discussed too many times here). If you look at this objectively, you'll see that Cell computing will initially be the backbone of an inter-Sony-Corp digital content fabric (as in the SCE patents I posted) that links together Sony's products, and those products to their digital media, in a pervasive manner. Which has several highly advantageous effects for the company's financials and such.

And with this will come Cell Computing in the Household. Which is kinda cool if you think about the potential for a PDA or TV to not only control and fetch digital content, but share processing power to do tasks that the PDA or TV alone couldn't. Could definitely screw over Intel's Xscale (and Moore's Law, something I said around 2 years ago) when you think about it from the standpoint of enabling this level of 'virtual' computing power in a PDA that could never fit within the physical bounds of an IC. And for the vast majority of tasks, the latency problem is nonexistent and totally maskable.

From there it could evolve into servers and computing on demand in a more utility/corporate manner. Just as IBM said they'll dynamically farm out computational resources to corporate entities that need them at the time, this is something Cell is capable of doing well. It also has the potential to be profitable.

And eventually, one day, we could see Cell computing over the internet itself, in tasks where the latency can be masked and which aren't RT or as time-sensitive as rendering at 60fps. This could be utilized to create what SCE called "World Simulation" (I think that's what they called it, Faf might remember). Which could, conservatively, be the same server-client dynamic that exists in today's multiplayer games - just with an order of magnitude or two more computing behind it. Or it could be much, much neater.

Regardless of the manifestation that this Cell takes, what is important to remember is that the concept of Cell/GRID computing itself is amorphous. It ushers in the ability to drastically raise the computational baseline that a singular device can deliver, and in the process nearly negates Moore's Law as the consumer sees it. It can be used in any number of potentialities: the ones we've conceived of, and the ones we can't even imagine.

While improvements in the internet topology might reduce this, there is a lower bound dictated not only by the processing requirements for network transmission but also by Einstein's Constant! (speed of light).

HA! There is something about the way this is worded that makes me laugh, not in a bad way, just funny. Ohh, and what if Joao Magueijo works for Sony? So much for that constant. ;)
 
archie4oz said:
I'm beginning to regret ever saying that... :? Besides it was in context to the GScube not the PS2...

Nice, you reminded me. I was going to comment on the GSCube16 as an extension of the ideal after introducing it with this comment. I wasn't really talking exclusively about the PS2 though; I was trying to imply that Sony's not exactly following the PC's DX/OGL roadmap that he's trying to compare it against. Should I go back, or do you think it's understood?
 
I think Pana should do a write-up on how Cell differs from normal PC stuff. You know, the thing that makes it "cool" and "interesting". Keep it in simple English though, and then maybe we can have a sticky for it.

There are over 200 pages on it on this board, go search back. If you can't understand the "techie" stuff, it's no one's fault but yours.

Here's the answer to your question.

1. Cell isn't based on the x86 architecture, hence it can't be used in a Windows PC.

2. Sony, IBM, and Toshiba are attempting to shove 32 APUs, each doing 32 GFLOPS and 32 GOPS, onto a single die, and Cell will be built on a 65 nm process. That's why Cell is cool and interesting.
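For what it's worth, a quick sketch of the arithmetic behind those figures (the per-APU numbers are just the ones quoted in this thread, not confirmed specs):

```python
# Sketch of the figures quoted above (rumoured numbers, not confirmed specs).
apus_per_chip  = 32
gflops_per_apu = 32

print(apus_per_chip * gflops_per_apu)  # 1024 GFLOPS, i.e. roughly 1 TFLOP per chip
```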
 
I have to say that all PC tech is cool and interesting, and really none of it interests me more than the rest. I'm looking forward to the PS3 as much as the Xbox 2. They will both have new tech that we haven't seen yet.
 
Squeak and Paul:

Thank you for your replies. One thing I think we need to consider, though, is that there is a difference between online distributed calculation on one hand and mere data transmission on the other. To do physics over a computational network, for example, one would have to send a tremendous amount of data (actually both instructions and data) over the network in order to perform the calculation. On the other hand, when performing AI and physics on your home machine, all you send over the network is the final state, which is comparatively tiny.

To make an analogy from OOP, imagine the difference between transmitting the data of an entire object (both procedures and variables) and merely sending the return value.
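A minimal sketch of that analogy, with hypothetical names, just to show the gap in payload size:

```python
import pickle

class RigidBody:
    """Hypothetical physics object: lots of internal state."""
    def __init__(self):
        # stand-ins for meshes, velocities, constraints, etc.
        self.positions  = [0.0] * 3000
        self.velocities = [0.0] * 3000

    def step(self):
        """Run the simulation locally and return only the final state."""
        # ... physics would go here ...
        return (1.0, 2.0, 3.0)

body = RigidBody()
whole_object = pickle.dumps(body)         # what remote computation would have to receive
return_value = pickle.dumps(body.step())  # what local computation sends over the wire

print(len(whole_object), len(return_value))  # tens of kilobytes vs. a few dozen bytes
```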
 
I wasn't the one who mentioned physics and AI being calculated in a distributed manner.

But yeah, AI and physics in online games aren't calculated in a distributed manner; the actions of other players are sent to you and your machine does everything. This is why your frame rate will drop if you play with more players online.
 
I was going to comment that usually physics and AI for an individual player are calculated locally on the user's machine, and position and status are what is really sent to the server hosting a multiplayer game. What may be possible in a global Cell network is that the events (climatic, astronomical, meteorological, or environmental, for example) or natural cycles in a virtual, physical world might be calculated in a distributed manner. Something like that would not necessarily rely on 60 fps updating or comm-link latency. Imagine you are playing in a virtual world and you witness the unfolding event of an incidental strike of a giant asteroid onto the planet (perhaps even a realtime-rendered cutscene?)... stuff like that, I imagine...

Running with that thought, what if realtime game engine cutscenes could have a scalable component to them? If you are running a single PS3, you get a cutscene appropriate for a single machine (which shouldn't be too shabby at all given the claimed capabilities). If you happen to have an array of Cell devices on tap, the cutscene engine scales up (as appropriate) with even more elaborate effects. ...or maybe you get more waving grass and plant life in response to wind calculations while playing your game? I may not be baked on some chiba to come up with that at this very moment, but I assure you I am thoroughly plowed on whiskey right now. :D
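Purely as a sketch of that "scale with whatever is on tap" idea (every name and threshold below is made up for illustration):

```python
# Hypothetical sketch: a cutscene/effects engine picking a detail tier
# from however much aggregate compute it can currently see.

def detail_tier(local_gflops: float, remote_gflops: list) -> str:
    total = local_gflops + sum(remote_gflops)
    if total >= 4000:                 # thresholds are invented for illustration
        return "full climate and crowd simulation"
    if total >= 2000:
        return "extra particles and waving grass"
    return "baseline single-machine cutscene"

print(detail_tier(1000.0, []))            # a lone console
print(detail_tier(1000.0, [1000.0] * 3))  # three more Cell devices on tap
```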

Maybe the trick behind all of this is not assuming a 1 TFLOP "machine" will be able to do "just anything" that a traditional 1 TFLOP machine could do. It's that in certain instances 1 TFLOP could be brought to bear. It may not be as universal an accomplishment as was originally thought, but still, 1 TFLOP of work is actually occurring under specific circumstances - things that would never be possible otherwise. Certainly nothing to scoff at. Let's say, for example, each PS3 does turn out to be a TFLOP. Under special circumstances, a paltry 1000-PS3 userbase connected over high-speed internet could host a 1 PFLOP operation. :oops: Now we know the userbase would easily be in excess of a million... Imagine the possibilities! Even if the PS3 turns out to be a mere 1 GFLOP machine (which would be an absurd premise shooting at the low side), you still could expect a potential 1 PFLOP of resources (for specific circumstances) with a userbase of 1 million high-bandwidth-connected PS3s.

Even if you assume a paltry 10% effectiveness due to network logistics, that's still 100 TFLOPS. That's an a$$load of power, no matter how you look at it. You could then reason that a userbase of 10 million is entirely possible, and then you are right back to 1 PFLOP. Imagine how fast a gene-splicing packet could run with resources like that (if they were to participate in the Cell)?! Hell, we might have cancer, AIDS, and eternal life figured out before the PS3 reaches EOL status (when the PS4 steps in).
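Writing out that last back-of-the-envelope case (the same guesses as in the post: 1 GFLOP per machine, a million users, 10% effectiveness):

```python
# Same back-of-the-envelope case as above, written out.
gflops_per_machine = 1.0
userbase           = 1_000_000
effectiveness      = 0.10      # 90% assumed lost to network logistics

aggregate_gflops = gflops_per_machine * userbase * effectiveness
print(aggregate_gflops / 1_000)       # 100.0  -> 100 TFLOPS
print(aggregate_gflops / 1_000 * 10)  # 1000.0 -> 1 PFLOP with ten times the users
```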
 
Vince said:
It's almost easier to just tell people they're right than to explain it (as this has been discussed too many times here). If you look at this objectively, you'll see that Cell computing will initially be the backbone of an inter-Sony-Corp digital content fabric (as in the SCE patents I posted) that links together Sony's products, and those products to their digital media, in a pervasive manner. Which has several highly advantageous effects for the company's financials and such.

Show me the money ...

And with this will come Cell Computing in the Household. Which is kinda cool if you think about the potential for a PDA or TV to not only control and fetch digital content, but share processing power to do tasks that the PDA or TV alone couldn't.

What concretely would that be, apart from playing games? Which doesn't really count, since that is what the console is for anyway; a PDA being able to run its game code on the console would be kinda cool... but given the difference in computing power, and the limited bandwidth over wireless, sharing computing power isn't a very accurate description of what would be happening. The PDA would be a dumb terminal to the console, and for that you don't really need Cell on the PDA.

Could definitely screw over Intel's Xscale (and Moore's Law, something I said around 2 years ago) when you think about it from the standpoint of enabling this level of 'virtual' computing power in a PDA that could never fit within the physical bounds of an IC.

I don't think Intel is worried; they have plenty of years to make money off Xscale while low-power mobile applications still need local processing power. The future isn't now, or anytime soon.

And for the vast majority of tasks, the latency problem is nonexistent and totally maskable.

If you really want to share computing power to build an aggregate computer, the bandwidth*latency product on the internet will continue to be an insurmountable problem for many years. Only simple batch processing with a high number of operations per bit can be effectively distributed over an internet Grid.

Timesharing of computing power on nearby Cell-based mainframes by dumb terminals, à la the PDA example, is possible... but you don't really need Cell in anything but the mainframe; it is more a protocol issue (X-windows/VNC/etc.) than an architectural issue.

From there it could evolve into servers and computing on demand in a more utility/corporate manner. Just as IBM said they'll dynamically farm out computational resources to corporate entities that need them at the time, this is something Cell is capable of doing well. It also has the potential to be profitable.

It might happen, but I doubt it will happen with Cell. IBM does not show quite the same faith in Cell's ability to be all things to everyone. No architecture fits all, never will ...
 
It might happen, but I doubt it will happen with Cell. IBM does not show quite the same faith in Cell's ability to be all things to everyone. No architecture fits all, never will ...

Selling computing power on demand and highly dense clustering of CELL machines and CELL ICs is something CELL has been designed for since its early days.

A brick can build a small house wall or a giant building, and that is the philosophy behind the PEs' design and the CELL ISA: the more PEs you add, the more the overall system (bandwidth permitting) has to work with.

Never say never, Mfa ;)

About a CELL PDA's advantage in a home CELL computing set-up (btw, with 54 Mbps of 802.11g we have quite a bit of bandwidth ;) ), well, it is true that you could use abstraction layers on the machine that would allow it to communicate, share data, and even distribute the computing load.

Those software layers would not come for free; they would waste cycles and efficiency: more of the processing power would be needed to keep up the communications than a CELL device would need.

CELL PDA chips would bring more bang for the same transistor budget, as they would not need such a big software abstraction layer.

On the application side, it would be simpler for developers to get different CELL-based devices to share data and inter-operate: the capability of the Apulets to freely travel over networks and the standard APU ISA would be welcome features for application programmers.

The idea of your home appliances sharing the same basic architecture and being inter-connected intelligently seems quite appealing.

I think that inside the home there is enough speed to share computing power: some jobs won't be huge; sometimes it will be those few extra Apulets that the first available CELL device picks up, and sometimes the main server will do most of the job and will spill what does not fit to the rest of the CELL computing partners.
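A toy sketch of that spill-over idea (names, numbers, and the scheduling itself are hypothetical, not the actual CELL software model):

```python
# Toy spill-over scheduler: the main server keeps what fits in its budget
# and hands the remaining Apulets to whichever partner devices are available.

def schedule(apulet_costs, server_budget, partners):
    local, spilled, used = [], [], 0
    for cost in apulet_costs:
        if used + cost <= server_budget:
            local.append(cost)
            used += cost
        else:
            spilled.append(cost)
    # round-robin the overflow across the partner devices
    assignments = {p: [] for p in partners}
    for i, cost in enumerate(spilled):
        assignments[partners[i % len(partners)]].append(cost)
    return local, assignments

print(schedule([4, 4, 3, 2, 2], server_budget=8, partners=["PDA", "TV"]))
# ([4, 4], {'PDA': [3, 2], 'TV': [2]})
```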
 
Show me the money ...

Take the $8 billion worth of ICs bought externally and say that, by putting CELL in most if not all Sony corp. devices, we can cut that to $5 billion (or less); that is saving Sony corp. at least $3 billion a year.
 
randycat99 said:
What, you only trust it coming from Pana? If you don't know yourself why it is different, btw, then how can you be so critical of it in the first place? This raises all sorts of questions about you, FWIW. If you want to know more, why don't you go over to Ars and read a blackpaper on it or something?

(This is not to say a piece of writing from Pana wouldn't be interesting. Just throwing in more suggestions, and raising eyebrows at Chap's peculiar request.)

What the heck are you rambling about this time... :?
The biggest reason I asked Pana is that he seems to be the only nice guy around who bothers to do long and nice Cell posts, AND he doesn't take those needless holier-than-thou jabs at other posters.

If anyone has to talk Cell nicely, Pana would be the best candidate.
 
ANYWAY, from the posts in this topic so far, it seems the reason Cell is hot around here is that people are expecting it to be the next big consumer connected CPU. Is that it?

Sounds really optimistic to me.
 
ANYWAY, from the posts in this topic so far, it seems the reason Cell is hot around here is that people are expecting it to be the next big consumer connected CPU. Is that it?

Not here, but some have optimism about Sony's plans for CELL in the future, i.e. not next gen.

If you want the details of the 'technical' merits of CELL, then run a search and look through some of Pana's posts.

Sounds really optimistic to me.

it is.
 
Pana's old posts were quite a handful for poor chappy. Hence I'm asking him for a simpler and cleaner one. Just what is the difference between Cell and conventional PC hardware? What does Cell provide over future PC systems in 3D games that makes it so graceful? How is 3D rendering on the PS3 different from Nvidia/ATi methods? Is it good?

BUT the main ho-hum is: Sony is going to pack tons of FLOPS into one 65 nm chip that would crush all other consumer CPUs? So we are to expect a small but ultra-powerful CPU? That's the beef with Cell?

Many questions, so many I still have to ask. So tired... :?
 
Vince said:
You do realize that there is an entire ideological body outside the realms of the nVidia <-> ATI spearheaded paradigm. I know, as my post stated, that this place is becoming nothing more than a place where bullshit arguing-points take precedence over talk of fundamentals, but in the spirit of this discussion try to think outside the box.

There are other ways to render a scene, by utilizing hardware that's not dominated by fixed functionality and arbitrary restrictions on what's an implemented "feature" and what's not. These competing ideologies can exist in the console realm (thank God), and the GS - which I was referring to - falls under this.

It's so easy for people to just look at the GS as this "[VoodooGraphics]*[16]" or whatever you called it, without seeing what SCEI's engineers are envisioning. I think the term 'Renderman in silico' was once used, and it's more or less what I feel they're striving for. They're giving you this tremendous amount of front-end computational resources that are connected with high-bandwidth buses and small, but fast, caches, and attaching a basic and fast-as-shit rasterizer that exists only to do highly iterative tasks. I happen to think this ideology is pretty damn neat, and kinda elegant in design.

But, instead of thinking about the design in toto and outside of the PC paradigm, people such as yourself can't rise above the petty comparisons to the PC (which is so diametrically opposed), calling it primitive or basic or "too traditional." Something which couldn't be further from the truth. Diefendorff's publications talk at length about this ideology, and they keep popping up too.

In my defense, I wasn't arguing the admittedly foolish point that the "Nvidia-Ati" paradigm is somehow sacrosanct, but rather merely critiquing Sony for not fleshing out their rasterizer fully. Functionality such as texture compression and basic bump mapping is useful (now necessary) no matter what base architecture the system relies on. That a platform has a "different approach" is not a blanket excuse for mistakes.

Vince said:
So, you think that by posting a positive, us "Sony-ites" won't jump on you for an ignorant comment? Believe it or not, I don't care how much you praise it or what architecture you're putting down - if you're wrong, I'll say so.

To be honest, I didn't care whether I was critiqued or not; I actually expected and maybe even wished for it. My point in reiterating my earlier comment was merely to prove to all concerned that I'm not a rabid anti-Sony zealot like a good chunk of this board. And my original comment was just my honest opinion, nothing more, nothing less.

Vince said:
...for the vast majority of tasks, the latency problem is nonexistent and totally maskable.

I agree. Most computational tasks aren't that latency-sensitive. Gaming, however, by its nature is. Photoshop is not required to process its various filters in 16.66 ms; consoles, however, have exactly that long to process and rasterize one frame. That still leaves the nasty problem of raw network bandwidth being minuscule compared to the computational internal bus I/O, though.
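To put that gap in rough numbers (both figures below are illustrative assumptions, not quoted specs):

```python
# Rough illustration of the bandwidth gap: an assumed ~25 GB/s of local
# bus bandwidth versus an assumed 10 Mbit/s broadband link.
internal_bus_bytes_per_s = 25e9
broadband_bits_per_s     = 10e6

network_bytes_per_s = broadband_bits_per_s / 8
print(internal_bus_bytes_per_s / network_bytes_per_s)  # ~20,000x difference
```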

Vince said:
akira888 said:
While improvements in the internet topology might reduce this, there is a lower bound dictated not only by the processing requirements for network transmission but also by Einstein's Constant! (speed of light).

HA! There is something about the way this is worded that makes me laugh, not in a bad way, just funny. Ohh, and what if Joao Magueijo works for Sony? So much for that constant. ;)

Even though, IIRC, it is possible for light to travel faster than "C", it has not even been hypothesised that causality can travel faster than "C". While neither of my degrees was in physics (thank god), this is what I understood.
 