akira888 said:
Argh... ok....
It's pretty clear from my post that what I was referring to as "too traditional" was the GS and only the GS. I believe my point is fairly obvious, given that the chip was far more basic than the CLX2 even though the DC was released 16 months before the PS2.
You do realize that there is an entire ideological body outside the realm of the nVidia <-> ATI spearheaded paradigm. I know, as my post stated, that this place is becoming nothing more than a place where bullshit arguing points take precedence over talk of fundamentals, but in the spirit of this discussion try to think outside the box.
There are other ways to render a scene, by utilizing hardware that's not dominated by fixed functionality and arbitrary restrictions on what's an implemented "feature" and what's not. These competing ideologies can exist in the console realm (thank God), and the GS - which I was referring to - falls under this.
It's so easy for people to just look at the GS as this "[VoodooGraphics]*[16]" or whatever you called it, without seeing what SCEI's engineers are envisioning. I think the term 'Renderman in silico' was once used, and it's more or less what I feel they're striving for. They're giving you this tremendous amount of front-end computational resources connected by high-bandwidth buses and small but fast caches, and attaching a basic and fast-as-shit rasterizer that exists only to do highly iterative tasks. I happen to think this ideology is pretty damn neat, and kinda elegant in design.
But, instead of thinking about the design in toto and outside of the PC paradigm, people such as yourself can't rise above the petty comparisons to the PC (which is so diametrically opposed), calling it primitive or basic or "too traditional" - something that couldn't be further from the truth. Diefendorff's publications talk at length about this ideology, and they keep popping up too.
I figured my (limited) point would also be obvious, especially after my positive comment about VU0 and VU1.
So, you think that by posting a positive, we "Sony-ites" won't jump on you for an ignorant comment? Believe it or not, I don't care how much you praise it or which architecture you're putting down - if you're wrong, I'll say so.
In a latency-intolerant environment such as real-time gaming, I fail to see how distributed computing would be in any way usable. Think about your "ping" on your network at home. Connecting with a latency of around 32ms is pretty good on DSL, yet at 60fps a frame lasts only ~16.7ms, so that's roughly two whole frames just to get your data across, and then two more to retrieve the result.
It's almost easier to just tell people they're right than to explain it (as this has been discussed too many times here). If you look at this objectively, you'll see that Cell computing will initially be the backbone of an inter-Sony Corp digital content fabric (as in the SCE patents I posted) that links together Sony's products, and the products to their digital media, in a pervasive manner. Which has several highly advantageous effects for the company's financials and such.
And with this will come Cell computing in the household. Which is kinda cool if you think about the potential for a PDA or TV to not only control and fetch digital content, but also share processing power to do tasks that the PDA or TV alone couldn't. It could definitely screw over Intel's XScale (and Moore's Law, something I said around 2 years ago) when you think about it from the standpoint of enabling a level of 'virtual' computing power in a PDA that could never fit within the physical bounds of an IC. And for the vast majority of tasks, the latency problem is nonexistent and totally maskable.
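To make the masking point concrete, here's a toy sketch of the idea (the 80ms delay, the 20ms steps, and the function names are all made up for illustration): the device fires off a remote job and keeps doing its own local work while the round trip is in flight, so the latency never stalls anything the user actually sees.

```python
# Toy sketch of latency masking: kick off a "remote" job and keep doing
# local work while waiting, so the round trip is hidden behind useful work.
# All delays and names here are invented for the example.
import asyncio

async def remote_compute(x: int) -> int:
    await asyncio.sleep(0.080)        # pretend network round trip + remote processing
    return x * x

async def local_work() -> None:
    for i in range(4):
        await asyncio.sleep(0.020)    # stand-in for UI updates, decoding, etc.
        print(f"local step {i} done")

async def main() -> None:
    remote = asyncio.create_task(remote_compute(7))   # fire off the remote request
    await local_work()                                # latency hidden behind local work
    print("remote result:", await remote)             # ready (or nearly) by the time we ask

asyncio.run(main())
```

The point isn't the code, it's that for anything not locked to a 16.7ms frame budget, the wait can be overlapped with work the device was going to do anyway.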
From there it could evolve into servers and computing-on-demand in a more utility/corporate manner. Just as IBM said they'll dynamically farm out computational resources to corporate entities that need them at the time, this is something Cell is capable of doing well. It also has the potential to be profitable.
And eventually, one day, we could see Cell computing over the internet itself for tasks in which the latency can be masked and which aren't real-time or as time-sensitive as rendering at 60fps. This could be utilized to create what SCE called "World Simulation" (I think that's what they called it, Faf might remember). Which could, conservatively, be the same server-client dynamic that exists in today's multiplayer games - just with an order of magnitude or two more computing behind it. Or it could be much, much neater.
Regardless of the manifestation that Cell takes, what is important to remember is that the concept of Cell/GRID computing itself is amorphous. It ushers in the ability to drastically raise the computational baseline that a single device can deliver, and in the process nearly negates Moore's Law as the consumer sees it. It can be used in any number of potentialities: the ones we've conceived of, and the ones we can't even imagine.
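If you want a concrete picture of that "raising the baseline" idea, here's a toy sketch (invented numbers, with a local thread pool standing in for other devices on the network) of farming work units out to whatever cells are reachable and aggregating the results:

```python
# Toy illustration of the grid idea: split a job into work units, hand them
# to whatever "cells" are available, and aggregate the results. A thread pool
# stands in for remote devices here; the counts are arbitrary.
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(chunk_id: int) -> int:
    """Stand-in for a work unit a single device couldn't finish alone in time."""
    return sum(i * i for i in range(chunk_id * 10_000, (chunk_id + 1) * 10_000))

available_cells = 8   # e.g. a console, a TV, and a couple of PDAs on the home network

with ThreadPoolExecutor(max_workers=available_cells) as grid:
    results = list(grid.map(simulate_chunk, range(64)))

print(f"aggregated {len(results)} work units across {available_cells} cells")
```

The baseline a single box can deliver stops being fixed by its own silicon and starts being a function of what it can reach.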
While improvements in internet topology might reduce this, there is a lower bound dictated not only by the processing requirements for network transmission but also by Einstein's constant (the speed of light)!
HA! There is something about the way this is worded that makes me laugh - not in a bad way, just funny. Oh, and what if Joao Magueijo works for Sony? So much for that constant.
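Joking aside, here's a rough back-of-the-envelope on that lower bound, using the vacuum speed of light only (it ignores routing, switching, and the fact that fibre propagates at roughly two-thirds of c; the distances are arbitrary examples):

```python
# Lower bound on round-trip latency from the speed of light alone.
C_KM_PER_MS = 299_792.458 / 1000.0   # ~300 km per millisecond in vacuum

def min_round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / C_KM_PER_MS

for d in (50, 1_000, 10_000):
    print(f"{d:6} km -> at least {min_round_trip_ms(d):6.2f} ms round trip")
```

Even an ideal 10,000 km link costs roughly 67ms round trip before any processing happens at all, which is exactly why this stuff only makes sense where the latency can be masked.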