How was the PS2 thought to work???

Re: ...

DeadmeatGA said:
I don't see why this would be, since both VUs are supposed to be identical. As long as you keep your VU program size and vertex list size to 4 KB, you can use either of them.
Why are you supposing both VUs are the same? Wherever you got that information from, it was wrong.

VU0 and VU1 share basic architecture but have quite a few differences.

VU1:
More RAM
Extra arithmetic unit (EFU)
Special Pipe to GS
Extra instructions for EFU and GS access

VU0:
Extra instruction set (Macro mode)
Coprocessor logic (fast register links etc)

Also the R5900 and GS both have extra logic to handle the connections with each VU (i.e. GS has an extra register mode explicitly to help VU1)

I'm no hardware engineer, but I doubt you could quickly rip out half of VU0 (macro mode) and replace it with a high-bandwidth pipe to the GS, extra RAM and an extra arithmetic unit, while at the same time altering the GS, DMAC and memory subsystem to accommodate the changes.
 
Re: ...

DeadmeatGA said:
I have yet to see a PSX2 title doing 10~20 million polys/s.

Burnout, released in Oct '01, did 15 million polys/s. Source: SCEE technical website & Adam Billyard, Technical Director of Criterion.

It sure pissed off Shinji Mikami, because he had rejected RenderWare in favour of an in-house solution on the grounds that it would enable them to create a more powerful game engine!

With Burnout 2, Criterion reduced the polygon count and concentrated on image quality.
 
I made this clear in the previous post - VRam is the only memory PVRDC can see. If the texture isn't in there when it starts to render, it doesn't exist as far as the chip is concerned.

Yeah you did make it clear and I didn't actually argue with what you said. What I was saying is that if PVRDC had virtual texturing it could complete its HSR and then only load in the visible textures. So I'm saying that the omission of virtual texturing was quite a big missed opportunity considering that PVRDC knows what is and isn't visible before textures are loaded. If it had control over texture upload it could have saved loads (66% in a normal scene) of texture space and bandwidth.
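For what it's worth, the "66%" figure is consistent with an average scene depth complexity of about 3, i.e. each pixel covered by roughly three surfaces. A quick back-of-envelope sketch (the depth-complexity value is my assumption, not from the post):

```python
# Overdraw arithmetic behind the ~66% claim.
# Assumed: average depth complexity of 3 (my number, not the poster's).
depth_complexity = 3.0
visible_fraction = 1.0 / depth_complexity   # texels a post-HSR upload actually needs
saved_fraction = 1.0 - visible_fraction     # texels that never had to be uploaded
# saved_fraction works out to 2/3, i.e. roughly the 66% quoted above
```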

Erhm... I seem to remember trilinear on PVRDC required triangles to be set up twice. I could be wrong, but doesn't this directly contradict the notion of the rasterizer doing more than one texture per pass?

No, PVRDC needed 2 cycles for trilinear AFAIR, not two passes.
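A quick sketch of why trilinear costs two of whatever bilinear costs: it is just a linear blend between bilinear samples taken from two adjacent mip levels (a generic illustration of the filtering maths, not PVRDC-specific code):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by fraction t."""
    return a + (b - a) * t

def trilinear(sample_mip_lo, sample_mip_hi, level_frac):
    """Blend two already-bilinearly-filtered samples from adjacent mip
    levels. Hardware with a single bilinear unit therefore needs two
    fetch cycles (or two passes) per trilinear-filtered fragment."""
    return lerp(sample_mip_lo, sample_mip_hi, level_frac)
```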
 
how was ps2 designed to work...



i think we should all go see the effects used in the tech demo shown on that other thread to see what ps2's vision was.
designs like these are what were supposed to make ps2 stand out over the competition, because the polygon-centered ideals just look different from the texture-centered ideals, in my opinion.
that demo has NO textures (if i looked at it right) but it still looks better than most other things i've seen rendered in real time on other architectures, be it PC or other consoles.

that's it.

if people don't like the fact that PS2 is a polygon-centered architecture, i think they should:
1) get over it after, what, 3 years?
2) just get another console/PC and stop bitching

the fact that PS2 handles nice-enough textures anyway only makes this thread even more useless.
 
I always had doubts about what they really wanted to do... I always associated the word "synthesis" with a world of equations. I mean, rendering through equations and using equation-based textures. Just like in a ray-tracing engine, where you define the ray-intersection method for the object and you define textures through equations too...
However, after reading the 4 pages... it has become clear to me that the philosophy applied in the real world was totally different.
PS2 games are not based on ray tracing but on traditional rendering (vertex coordinates + textures + texture coordinates on faces). When you have calculated these for the frame to render, you send them to the GS and it paints them. My question was whether Sony wanted this philosophy to be applied or had another one in mind.

BTW, I've heard that World Rally Car from SCEE uses synthesized textures instead of files... Is this true?
 
ShinHoshi said:
I always had doubts about what they really wanted to do... I always associated the word "synthesis" with a world of equations. I mean, rendering through equations and using equation-based textures. Just like in a ray-tracing engine, where you define the ray-intersection method for the object and you define textures through equations too...
However, after reading the 4 pages... it has become clear to me that the philosophy applied in the real world was totally different.
PS2 games are not based on ray tracing but on traditional rendering (vertex coordinates + textures + texture coordinates on faces). When you have calculated these for the frame to render, you send them to the GS and it paints them. My question was whether Sony wanted this philosophy to be applied or had another one in mind.

BTW, I've heard that World Rally Car from SCEE uses synthesized textures instead of files... Is this true?



oh, you mean PROCEDURAL rendering!!!!!

gosh, it took me a while to get the meaning of your post.......

procedural rendering makes things easier in some cases, but i don't think we're at the stage where procedural rendering can take over from traditional rendering techniques, IF that ever actually happens.

Procedural rendering was discussed not long ago on another thread.

basically the philosophy behind the EE was "making a chip that can calculate the biggest amount of complex mathematical equations in the shortest amount of time", and that helps in procedural-related techniques, where sheer mathematical performance wins over hardwired features and memory size.

Procedural rendering HELPS when rendering environments, vegetation, water equations and pretty much everything that hasn't got "human drawn" written on it... you get me?
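As a toy illustration of "maths instead of stored textures": a classic marble-like pattern generated purely from an equation per sample (a generic procedural-texturing sketch, not from any PS2 title):

```python
import math

def marble(x, y):
    """Return a shade in [0, 1] computed from pure maths - a sine wave
    perturbed by another sine, the classic procedural marble trick.
    No texture memory is read; every sample is just an equation."""
    return 0.5 + 0.5 * math.sin(0.1 * x + 4.0 * math.sin(0.05 * y))
```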
 
Has anybody here seen the Assembly 2002 seminar on PS2 Linux? Well, basically it is a 34-minute intro on how the stuff works, explained by a few representatives, and it goes into some detail. HOWEVER, there is marketing in it, but they try to hide it as well as they can :rolleyes:.

Now, the fun part of it: 25 minutes in, they demo a handful of 16k demos running solely on VU1. It's interesting, and really shows it off. Procedural stuff, so to say. Includes fractals, physics and, you guessed it, raytracing. :cool: - It's only 5 minutes of effects but well worth it.

download from scene.org (34:31, wmv, 124MB)
 
phed said:
Now, the fun part of it: 25 minutes in, they demo a handful of 16k demos running solely on VU1. It's interesting, and really shows it off. Procedural stuff, so to say. Includes fractals, physics and, you guessed it, raytracing. :cool: - It's only 5 minutes of effects but well worth it.

download from scene.org (34:31, wmv, 124MB)

Had this vid for quite some time already.. impressive indeed. Got to love the grass demo and the puppet one. If I am not mistaken, the source of the demos should be available somewhere on the ps2linux community site...
 
If I am not mistaken, the source of the demos should be available somewhere on the ps2linux community site...

Both source and compiled binaries. I still like last year's contest winner the best though...
 
archie4oz said:
Both source and compiled binaries. I still like last year's contest winner the best though...

Archie, what kind of a demo was it? Any links? :D
 
Scott Matthew's marionette demo. Basically a marionette figure that you can move around, using the buttons to actuate the joints (he incorporated IK to calculate the animation of joint chains), and the figure casts shadows into the environment (but is not self-shadowed).

You can get it from the Linux site as well... (BTW, the demos are from the SCE VU coding contest, not from Assembly '02).
 
archie4oz said:
Scott Matthew's marionette demo. Basically a marionette figure that you can move around, using the buttons to actuate the joints (he incorporated IK to calculate the animation of joint chains), and the figure casts shadows into the environment (but is not self-shadowed).

You can get it from the Linux site as well... (BTW, the demos are from the SCE VU coding contest, not from Assembly '02).

ah that one! for some reason, I thought you were referring to another contest! Yeah, that was my favorite too (I called it the puppet one above :LOL:)... mighty impressive!
 
...

To Vince

What you have posted is not complete and does not tell us what SCEI is claiming. After all, SCEI filed a patent for CELL despite it being designed by IBM. I know for a fact that the Emotion Engine was engineered at Toshiba America's San Diego office, and that same office was happy to sell a variation of the EE, sans vector units, to others.

To DeanoC

Why are you supposing both VUs are the same? Wherever you got that information from, it was wrong.
I know they are not. But they could be used the same way if desired (just ignore any VU1-specific instructions and keep the microcode size below 4 KB). And Sony's claim of 30 million polys/s peak with one light presumed the use of both VUs.
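For reference, the memory asymmetry behind that 4 KB limit, using the commonly documented sizes (VU0 has 4 KB of microprogram memory, VU1 has 16 KB):

```python
VU0_MICRO_MEM = 4 * 1024    # bytes of VU0 microprogram memory
VU1_MICRO_MEM = 16 * 1024   # VU1 has four times as much

def fits_both_vus(microcode_bytes):
    """A microprogram small enough for VU0's 4 KB also fits in VU1,
    which is the sense in which the two could be 'used the same'."""
    return microcode_bytes <= VU0_MICRO_MEM
```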

ShinHoshi

My question was whether Sony wanted this philosophy to be applied or had another one in mind.
I am sure SCEI realized VU0 was too slow for such a purpose. It was just a marketing decision, to claim big FLOPS and polycount numbers.
 
What you have posted is not complete and does not tell us what SCEI is claiming.
The patent quote he posted clearly puts your "second VU was added at the last moment" rambling to a well-deserved rest.

Besides that, you are arguing here with people who have much more knowledge of the matter, and first-hand experience of what they are talking about (DeanoC is a programmer at Konami, btw) - and they are making you sound sillier with every post.

Just stop making an even bigger fool of yourself. Stop it, please...
 
30M using one VU

DM,

hmm.. ( scribbles frantically )

10 cycles per vertex...

4 to transform vertex
1 to apply perspective
3 to rotate normal
1 for ambient
1 for light

tada....

all on one VU ( yup - works on Linux kit )
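Plugging that 10-cycle budget into the clock rate bears the figure out (assuming, as I understand it, that the VUs run at the EE core clock of 294.912 MHz):

```python
EE_CLOCK_HZ = 294_912_000   # EE core clock; the VUs run at this rate
CYCLES_PER_VERTEX = 10      # budget from the breakdown above

vertices_per_second = EE_CLOCK_HZ / CYCLES_PER_VERTEX
# ~29.5 million transformed and lit vertices/s on a single VU,
# i.e. right at the quoted 30M figure
```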

I'm sure that you'll find the EE was designed as a whole.
There are lots of public docs available that show the thinking behind the architecture, as well as how devs have used it.
 
Paul said:
CELL was/is designed by Toshiba, IBM, and Sony.

Exactly. The basic architecture was laid out by a handful of people from STI. From IBM (i.e. the "I" in "STI"), their five-person contact team was composed of Michael Gschwind, Jim Kahle, Chuck Moore, Marty Hopkins, and Peter Hofstee.
 
Teasy said:
Yeah you did make it clear and I didn't actually argue with what you said. What I was saying is that if PVRDC had virtual texturing it could complete its HSR and then only load in the visible textures. So I'm saying that the omission of virtual texturing was quite a big missed opportunity considering that PVRDC knows what is and isn't visible before textures are loaded. If it had control over texture upload it could have saved loads (66% in a normal scene) of texture space and bandwidth.
The bandwidth IS saved that way - PVRDC performs texel fetches based on visibility criteria. Writes from external memory to VRam are a different matter than on other consoles - the only things that really get uploaded to VRam are vertex/display lists; textures would normally stay resident all along (unlike on PS2/GC, you can't swap them during rendering anyhow).
Anyway, what you talk about would be important if there was a need to keep the amount of VRam low, but in the case of DC the memory wasn't all that fast and consequently quite a lot cheaper than what the likes of GC/PS2 use for rasterizer memory - so it could have "enough" of it. :p

No, PVRDC needed 2 cycles for trilinear AFAIR, not two passes.
We need Simon here :p but I distinctly remember that trilinear used two triangle passes, which was probably one of the main reasons DC titles pretty much never use trilinear.

Deadmeat said:
I know they are not. But they could be used the same if desired(Just ignore any VU1 specific instructions and keep the microcode size below 4 KB).
Look, this has been told to you five times in this thread alone and you still ignore it: VU0 can't output results, VU1 can. That's a rather fundamental difference when you try to "equate" the use of the two units.
Also, keeping microprograms smaller will only come at the cost of performance - optimized microcode is on average two to three times larger than non-optimized. Not exactly a desirable "side effect" of "keeping" code small.
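To illustrate the size/speed trade-off in that last point: an unrolled loop does the same work with less per-iteration overhead but several times the code (a generic sketch of the technique, not actual VU microcode):

```python
def dot_rolled(a, b):
    # Compact version: minimal code, but loop overhead on every element.
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

def dot_unrolled4(a, b):
    # Unrolled by 4: same result, a quarter of the loop iterations,
    # but visibly more code - the kind of growth that makes optimized
    # microcode two to three times larger than the naive version.
    assert len(a) % 4 == 0 and len(a) == len(b)
    s = 0.0
    for i in range(0, len(a), 4):
        s += a[i] * b[i] + a[i + 1] * b[i + 1]
        s += a[i + 2] * b[i + 2] + a[i + 3] * b[i + 3]
    return s
```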
 