Kutaragi talks CELL technology @ TGS2002

Vince said:
How 'bout the two of you pull the cocks out of your collective asses, stick them in your mouths and shut the fuck up.

You're both useless to any form of technical discussion, and I (and I know I'm not alone) am getting sick of this extremely biased bullshit. You're no better than a person like Derek Smart, who just caused the BS over in the PC forum. Keep it up...

[...]

I thought you learned after the BS from a year or two ago, Johnny, how stupid you sound... get ready for another rude awakening.

Speaking of Derek Smart's writing style. ;-) Come on, Vince, no one in this thread was cussing or attacking ad hominem until you started.

I think duffer, antlers4, MfA, and Gubbi have brought up some very real feasibility concerns with a wide-area distributed Grid architecture (as you propose) for running near-hard real time tasks like world-simulation and physics for a game engine.

I would be very interested in hearing your response to these concerns in terms of concrete engineering solutions.
 
I think duffer, antlers4, MfA, and Gubbi have brought up some very real feasibility concerns with a wide-area distributed Grid architecture (as you propose) for running near-hard real time tasks like world-simulation and physics for a game engine.

Absolutely, which is why: a) I wasn't addressing them, I quoted who I was addressing, and b) I want this to remain a discussion and not a simple "Sony sucks, they just use PR to cover their inferiorities" thread filled with such comments.

I'm just sick of comments like this out of the two of them...

I would be very interested in hearing your response to these concerns in terms of concrete engineering solutions.

I don't know it all; I don't work for SCEI. But I have seen Okamoto's GDC presentation, and I relayed here what he stated. I'm not pro or con at this point - only open-minded and wanting a discussion that follows in the same light.

The latency is an issue, but is it that much worse than the inherent latency of a region-based deferred renderer, or any non-IMR scheme for that matter? Or worse than AFR's latency or triple buffering? (My memory of this stuff is fading.)
 
Physics must be computed per frame; it has similar latency requirements to rasterization.

Physics is computed on a time base, isn't it? Or do developers still do it per frame? If you are doing it per frame, won't it be a bit too coarse for today's frame rates?
 
The latency is an issue, but is it that much worse than the inherent latency of a region-based deferred renderer, or any non-IMR scheme for that matter? Or worse than AFR's latency or triple buffering? (My memory of this stuff is fading.)

The main problem is that network lag is both unpredictable and potentially unbounded. If you can guarantee the latency is always no more than, say, 2 frames, it is possible to work around the lag; furthermore, human perception tends to compensate for a consistent lag fairly easily.

But a distributed app where one packet takes 10 ms to traverse the network and the next packet takes 110 ms is just a plain nightmare if you depend on being able to execute a main loop in 30 ms.

That's not including the scheduling delay that might happen should the node you pick to run your simulation currently be busy running some other task.

Just look at how bad streaming media systems are over the Internet, even with large buffers, static content, and loads of error correction and stream scalability, and that's just a "simple" stream of static video content from one location to another.

Writing a system that tries to be real time in these conditions is an exceedingly difficult task.
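To put a number on that, here's a toy sketch; the latency distribution and all the names are completely made up, purely for illustration:

Code:
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    // Made-up distribution: mostly small delays, but with a heavy
    // tail of 100+ ms spikes, like a congested Internet path.
    std::exponential_distribution<double> tail(1.0 / 25.0);
    const double frameBudgetMs = 30.0;
    const int frames = 10000;
    int missed = 0;
    for (int i = 0; i < frames; ++i) {
        double latencyMs = 5.0 + tail(rng);   // per-packet network delay
        if (latencyMs > frameBudgetMs) ++missed;
    }
    std::printf("%d of %d frames missed the %.0f ms budget\n",
                missed, frames, frameBudgetMs);
}

With a heavy-tailed delay like that, a large fraction of frames blow the budget, and no amount of tuning the main loop fixes the tail.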
 
V3 said:
Physics must be computed per frame; it has similar latency requirements to rasterization.

Physics is computed on a time base, isn't it? Or do developers still do it per frame? If you are doing it per frame, won't it be a bit too coarse for today's frame rates?

Your physics calculations may be time-based, but you still need to sample them at least once per frame; otherwise you don't know what to render where (where is object X now?).

My understanding is that a normal main game loop looks something like this:

1. Initialize world state.
2. Get user input.
3. Compute AI state based on current world state.
4. Apply input and AI state to the world.
5. Simulate world.
6. Render the player's view of the world based on the results of the simulation.
7. Goto 2.
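In code, a minimal sketch of that loop (all names are illustrative stubs, not from any real engine):

Code:
#include <cstdio>

// Illustrative stub engine; not from any real game.
struct World { float x = 0.0f, v = 0.0f; };  // one moving object
struct Input { float steer = 0.0f; };

Input readPad()                      { return Input{0.1f}; }           // step 2
void  thinkAI(World&)                { /* decide NPC intents */ }      // step 3
void  applyInput(World& w, Input in) { w.v += in.steer; }              // step 4
void  simulate(World& w, float dt)   { w.x += w.v * dt; }              // step 5
void  render(const World& w)         { std::printf("x = %f\n", w.x); } // step 6

int main() {
    World world;                    // step 1: initialize world state
    const float dt = 1.0f / 60.0f;  // fixed 60 Hz frame step
    for (int frame = 0; frame < 3; ++frame) {  // step 7: goto 2 (bounded here)
        Input in = readPad();
        thinkAI(world);
        applyInput(world, in);
        simulate(world, dt);
        render(world);
    }
}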

I think frame-based engines are still very popular on consoles -- you have a fixed platform, so you can tune the heck out of it.

For example, I think Halo is a frame-based engine (which is why you can't system link the US and European versions - PAL 50 Hz vs. NTSC 60 Hz causes the network code to diverge almost immediately).

http://forums.bungie.org/halo/archive5.pl?read=100138
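A toy sketch of why per-frame simulation diverges across the two TV standards (the numbers are hypothetical):

Code:
#include <cstdio>

int main() {
    // Frame-based physics: the object moves a fixed amount per FRAME,
    // not per unit of time. Hypothetical numbers for illustration.
    const float stepPerFrame = 0.1f;
    float posNTSC = 0.0f, posPAL = 0.0f;
    for (int f = 0; f < 60; ++f) posNTSC += stepPerFrame; // 1 s at 60 Hz NTSC
    for (int f = 0; f < 50; ++f) posPAL  += stepPerFrame; // 1 s at 50 Hz PAL
    // After one wall-clock second the two "identical" game states differ,
    // so lockstep network play between them falls apart immediately.
    std::printf("NTSC: %.1f  PAL: %.1f\n", posNTSC, posPAL); // 6.0 vs 5.0
}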
 
Apart from it being possible, which I don't think it is, there has to be a good reason to move the processing away from the console ... oversubscription is not an option, so you don't save a whole lot of money if you just use the distributed processing power for games.
 
Your physics calculations may be time-based, but you still need to sample them at least once per frame

Sampling your time-based physics calculation per frame and calculating your physics on a frame basis are still two different things.
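I mean something like the classic fixed-timestep accumulator. A rough sketch (the rates and names are mine, purely illustrative):

Code:
#include <cstdio>

int main() {
    // Rates and names are illustrative.
    const double physicsDt = 0.01;        // physics on its own 100 Hz time base
    const double frameDt   = 1.0 / 60.0;  // renderer samples at 60 Hz
    double accumulator = 0.0;
    double pos = 0.0, vel = 1.0, t = 0.0;

    for (int frame = 0; frame < 5; ++frame) {
        accumulator += frameDt;
        while (accumulator >= physicsDt) {  // 0..N fixed steps per frame
            pos += vel * physicsDt;         // time-based integration
            t += physicsDt;
            accumulator -= physicsDt;
        }
        // The frame merely samples whatever state the time base has reached.
        std::printf("frame %d sees pos=%.4f at t=%.2f\n", frame, pos, t);
    }
}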
 
V3 said:
Your physics calculations may be time-based, but you still need to sample them at least once per frame

Sampling your time-based physics calculation per frame and calculating your physics on a frame basis are still two different things.

Well, yes.

My point is you still have to do some calculations every frame to determine what the physics state of the world is. You need new position information for all the objects, particles, etc. in your game every frame. Thus the need to have something come back from your physics engine every 30 ms.

If you can't guarantee this, it is just as bad as missing a frame render, since you'll only have stale state to render.

And that's why I think doing physics over a Grid will be hard.

Just as a random example: suppose I tell some Grid node to start calculating how a mesh simulating a piece of cloth will behave given a certain amount of wind. I will need results back from it every 30 ms; otherwise, my mesh will not update every frame.

If the player's character collides with the cloth, I need to send that node the collision information, have it compute the results of that collision and send it back to me, again, in less than 30 ms.

If the network ever fails to deliver a packet on time, strange results will probably occur. The cloth might interpenetrate the player character. The cloth might stop animating. The cloth might animate with an odd delay. Etc.
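Here is a sketch of that failure mode. Everything is made up: the "grid node" is just a delayed local call standing in for the network, and the lag values are invented:

Code:
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

using namespace std::chrono;

struct ClothState { int step = 0; };

// Hypothetical stand-in for a remote grid node: the computation is
// trivial; the unpredictable part is the network delay.
ClothState remoteSimulate(ClothState s, milliseconds netLag) {
    std::this_thread::sleep_for(netLag);
    return ClothState{s.step + 1};
}

int main() {
    ClothState cloth;
    const milliseconds lags[] = {10ms, 110ms, 15ms};  // jittery round trips
    for (milliseconds lag : lags) {
        auto reply = std::async(std::launch::async, remoteSimulate, cloth, lag);
        if (reply.wait_for(30ms) == std::future_status::ready) {
            cloth = reply.get();  // fresh physics arrived in time
            std::printf("rendered fresh cloth step %d\n", cloth.step);
        } else {
            // Deadline missed: render last frame's stale state instead.
            std::printf("deadline missed, rendering stale step %d\n", cloth.step);
            reply.get();          // drain the late result and discard it
        }
    }
}

Every time the reply misses the 30 ms window you are stuck showing last frame's cloth, which is exactly the stale-state problem described above.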
 
The latency is an issue, but is it that much worse than the inherent latency of a region-based deferred renderer, or any non-IMR scheme for that matter? Or worse than AFR's latency or triple buffering?

1. There are many solutions to decrease the delay in a TBR, one of them being a higher clock.
2. The delay in a TBR isn't that high to begin with.

The pipe to your PS2 is the bottleneck. Fiber optics to the home isn't happening anytime soon.
 
Has anyone thought of the 'grid' computing as being inside the PS3 hardware itself rather than over a network? :oops:
 
archie4oz said:
Hype? :-?

Good god, it's just a friggin' conference presentation! :devilish: But I guess everybody who delivers a presentation at a conference or expo *must* be a hype machine, right? :rolleyes: The damn thing had practically nothing to do with the PS3, and was primarily a business recap and a presentation on online business strategy! JHC...! :rolleyes:

Settle down.

The hype comment was about the initial statement made by Sony/IBM that CELL would scale to 1 TFLOPS, which the PS crowd picked up as the level of performance the next (or next+1) PS would achieve. That was ominously reminiscent of the pre-PS2 launch. And while Sony never stated that the PS2 would deliver CGI-level graphics (some gaming rag did), just as they never stated that the next-generation (or next+1) PS would achieve 1 TFLOPS, they certainly benefited from the hype (lots of media attention for the PS2).

I actually found the above presentation refreshingly down-to-earth and devoid of marketing hyperbole, especially since they indicated the true level of performance of a (single-chip) CELL-based design.

Cheers
Gubbi
 
Cell may or may not work. It may rock the world or it may be a flop, most likely somewhere in between, but I don't think that's the point. To me, Cell is an attempt at something grand: not only to change computing but to break Moore's Law, and that IMHO deserves recognition. While companies like Intel and AMD happily shrink their dies and advance lithography, humming along and nit-picking at each other as if waiting for the day when Moore's Law won't sustain them anymore, Sony has taken a much grander approach. However naive or unrealistic you may find it, the very fact that they have started this not JUST from a profit point of view but from an engineering one as well earns my respect. Going forward to *try* to leap past Moore's Law, instead of trying to sustain it or becoming contained by it, is also a much healthier approach, no? Whatever happened to human ingenuity?

It's funny, really; everything that Ken Kutaragi said in the interview with Nikkei Electronics has come true (well, not *everything*). He was talking about how there are far too many engineers (who call themselves engineers because they have a degree and read some publication like Nikkei Electronics, or in our case, Beyond3D) who just sit back and criticize a given proposal without - heck, I just found the quote:

"There are too many watchers who criticize other people's technologies and there are too little who really are giving everything a good thought. Those mere criticizers claim they are engineers, but alas, they are salaried workers. They think they understand their stints by reading NIKKEI ELECTRONICS, for instance. It is nothing if the magazine knows more than the field in which you specify."

It's more than true; it's blatantly obvious. Although many of you bring up very valid points, they rest on too many assumptions. None of us here knows anything about CELL in any detail. It seems ludicrous to me to suppose that the very team at Sony, IBM and Toshiba designing this has not thought of complications such as latency, programmable vs. hardwired, etc. What I mean to say is: while your concerns are valid, you are hardly the first to think of such things; they were considered at the same time the concept for CELL was born. If we can agree on that, then it would be safe to say they would not carry forth such a costly and risky venture without thinking through the exact details, let alone such blatantly obvious things as latency.

I have to admit I like Sony's engineering departments, especially A/V and SCEI. I can't say the same about Sony Music or their customer management in the U.S., but Sony engineering I truly am a big fan of. (Better to admit it first than to be called one later :D) The reason is quite simple: they are visionaries. They gave birth to portable music, flat-screen TVs, etc. Some fail, some prosper, but what matters is that they have vision, and they change the way many people live their lives in ways beyond mere incrementality. This kind of healthy mentality, instead of sitting on one's fat ass surfing some forum and typing one-line negative sentiments, is what makes me smile.
 
Since I can't read Japanese but can see the pictures, can anyone explain to me how this would be any different to the Transputer philosophy?
 
Pryde, that would be a fair accusation if everyone were saying that what Sony is doing is sure to fail ... but saying that someone else's interpretation of what they are going to do would not be possible is something else entirely, and that is what usually happens here.
 
Well, with the fiasco that is the PS2 GS, I have less confidence in Sony engineering. What were they thinking?

The PS2 is a poor design for a year 2000 machine. This isn't to say that a similar but fully realized version of this design couldn't work for a PS3. Nevertheless, it would be nice if they got their current ideas to work before tackling new ground.

As I said before I'm keeping an open mind, but Sony deserves the same scrutiny that anyone else gets. When mainstream news media (and some around here) are making claims of 1000x performance and breaking Moore's Law, then it's time for a little skepticism.
 
Johnny, maybe you would like to go here

And tell us what would be a better design for a console releasing in March 2000.
 
Well, with the fiasco that is the PS2 GS, I have less confidence in Sony engineering.

There are games that clearly show what the PS2 is capable of, and I wouldn't call that a failure by any stretch of the imagination. It took programmers some time to figure out how to work with the hardware, but thinking different != failure. The PS2 is very good hardware for the time it launched. Definitely much better than what you could assemble for a reasonable price from Intel+nVidia parts.
 