News about Rambus and the PS3

In short, what is the advantage of having subpixel triangles?

Well, if you're interested in REYES and micropolygons, you can read about it here, I suppose.

http://www.cc.gatech.edu/dvfx/readings/cook-carpenter-catmull-s87.pdf

There is also the stochastic sampling article, but I think you need an ACM account.


V3, if the GPU is Cell based they can surely split dicing and even shading... that would work quite well, if I may add.

Well, I haven't heard any revision from that patent, so I am just sticking to it, till they change it. Unless you have inside information, you SHOULD share ;)

I originally assigned slicing 'n dicing and Shading to the CPU thanks to its 1 TFLOPS of processing power, but if the GPU is indeed Cell based, sharing this load should not be too complicated, as Apulets/Software Cells are supposed to be able to migrate if there is need of additional processing power ( Intelligent multi-processing ).

Well, you can do 50/50 or whatever I guess, but the shading of micropolygons is the one that's going to eat the available FLOP performance.
 
Squeak said:
Okay, maybe I shouldn’t have mentioned voxels, as I’m not completely sure how they work, but what I meant was this: When you can generate multiple 3d points (that’s what vertices are, right?) per pixel, why bother sending all that extra information to draw triangles when you could just colour the 3d points and be done with it? In short, what is the advantage of having subpixel triangles?

I think one good reason might be that to calculate lighting accurately you need a normal, which means you need a surface (which a triangle has). I don't know how you would calculate accurate, directional lighting for a surface made of particles. For a bumpy surface, you'd have to try and give each particle a size (radius) and cast a shadow from that point, which would be even more complex for point or spot lighting.
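As a toy illustration of that point about needing a surface (a hypothetical Python sketch, nothing from the patent): a triangle's normal falls straight out of a cross product, which is what makes directional (Lambert) lighting well defined on a surface and ill defined on a bare point.

```python
# Sketch: a triangle gives you a normal for free via a cross product;
# a lone 3D point has no inherent orientation, so diffuse lighting
# on an unstructured point cloud needs extra machinery.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = sum(x*x for x in v) ** 0.5
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def triangle_normal(p0, p1, p2):
    # Normal of the plane spanned by two triangle edges
    return normalize(cross(sub(p1, p0), sub(p2, p0)))

def lambert(normal, light_dir):
    # Diffuse term: clamp N.L to zero for back-facing surfaces
    return max(0.0, dot(normal, normalize(light_dir)))

# A triangle lying in the XY plane, lit from straight above (+Z)
n = triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(lambert(n, (0, 0, 1)))  # 1.0: fully lit
```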

Though I imagine self shadowing on a micro-polygon level would be quite expensive anyway.

Maybe I'm missing something pretty simple ... ?
 
Micro-polygons give you an advantage in displacement mapping ( where you should go to the sub-pixel level to do it correctly ), and they negate the need for per-pixel shaders for lighting, as we can do vertex shading ( calculate lighting at the vertex level ), which with a polygon smaller than a pixel should be equal to or better than pixel shading.
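A minimal sketch of what "vertex shading at sub-pixel size" amounts to, assuming a REYES-style dice step (all function names here are hypothetical): subdivide a screen-space quad until each micropolygon covers at most a pixel, then evaluate the shader at the grid vertices, which gives at least pixel-rate shading density.

```python
# Sketch of REYES-style dicing: pick a grid resolution so every
# micropolygon is <= shading_rate pixels across, then shade vertices.
import math

def dice_rate(quad_w, quad_h, shading_rate=1.0):
    # Number of subdivisions along each axis (quad size is in pixels)
    nu = max(1, math.ceil(quad_w / shading_rate))
    nv = max(1, math.ceil(quad_h / shading_rate))
    return nu, nv

def dice(quad_w, quad_h, shade, shading_rate=1.0):
    """Return a grid of shaded vertex colours for the diced quad."""
    nu, nv = dice_rate(quad_w, quad_h, shading_rate)
    # shade(u, v) is evaluated once per vertex of the (nu+1)x(nv+1) grid
    return [[shade(u / nu, v / nv) for u in range(nu + 1)]
            for v in range(nv + 1)]

# A 4x3 pixel quad diced at one sample per pixel: 4 rows of 5 vertices
grid = dice(4, 3, lambda u, v: (u, v, 0.0))
print(len(grid), len(grid[0]))  # 4 5
```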

I do not think we are going to use shadow volumes in the future, as the more the polygon count of models grows, the more fill-rate and processing intensive shadow volumes ( and the related stencil operations ) become...

Micro-polygons are quadrilaterals in the REYES implementation, so to render them you would only need to send two coordinates ( or maybe even only one, depending on how your rasterizer, or whatever processor you use after the shading part... or for the shading part... it depends on how you set things up :), renders particles )...

Each coordinate would still contain depth info, as the visibility part will need it to sort the micro-polygons correctly.
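Those depth-carrying samples can be resolved with a plain z-buffer; here is a hedged sketch (hypothetical helper name) of the visibility step keeping the nearest sample per pixel.

```python
# Sketch of the hiding/visibility step: each shaded micropolygon is
# treated as a point sample carrying a depth, and a z-buffer keeps
# the nearest sample per pixel.

def resolve(samples, width, height, far=float("inf")):
    # samples: iterable of (x, y, depth, colour) point samples
    zbuf = [[far] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for x, y, z, colour in samples:
        # Keep the sample only if it is closer than what is stored
        if 0 <= x < width and 0 <= y < height and z < zbuf[y][x]:
            zbuf[y][x] = z
            image[y][x] = colour
    return image

# Two samples land on the same pixel; the nearer (smaller z) one wins
img = resolve([(0, 0, 5.0, "red"), (0, 0, 2.0, "blue")], 2, 2)
print(img[0][0])  # blue
```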

The regular models are not made of micro-polygons... they can use triangles, they can use quads or they can use HOS... everything gets converted into micro-polygons...

See, this is the REYES pipeline:

Pipe_REYES.gif
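The split stage in that chart can be caricatured in a few lines; this is only an illustrative skeleton (screen-space rectangles stand in for real primitives, and the dice/shade work is left as a comment), not the actual algorithm from the paper.

```python
# Illustrative skeleton of the REYES bound/split loop: recursively
# split primitives until they are small enough to dice into a grid
# of micropolygons. Rectangles here are (x, y, w, h) in pixels.

def reyes(primitives, max_size=1.0):
    shaded = []
    stack = list(primitives)
    while stack:
        x, y, w, h = stack.pop()
        if w > max_size or h > max_size:
            # Split: halve along the longer axis until dice-able
            if w >= h:
                stack += [(x, y, w / 2, h), (x + w / 2, y, w / 2, h)]
            else:
                stack += [(x, y, w, h / 2), (x, y + h / 2, w, h / 2)]
        else:
            # Dice, shade, and sample would happen here
            shaded.append((x, y, w, h))
    return shaded

micropolys = reyes([(0.0, 0.0, 4.0, 2.0)])
print(len(micropolys))  # 8 one-pixel micropolygons from a 4x2 quad
```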
 
Well, I haven't heard any revision from that patent, so I am just sticking to it, till they change it. Unless you have inside information, you SHOULD share ;)

I am going by the patent too...

Well, you can do 50/50 or whatever I guess, but the shading of micropolygons is the one that's going to eat the available FLOP performance.

I know Shading is going to eat a lot of FLOPS ( it is also the part in which you sample textures ), but that can be split... we can separate the work into several Software Cells and have most of the CPU's APUs that are not doing A.I., physics, sound, etc. do Shading, with the extra Apulets sent to the GPU...
 
what I'm worried about is all this FLOPS talk... we already have GPUs that theoretically push hundreds of GFLOPS, but those GFLOPS are for shader calculations and all that... I mean, people were saying the XGPU was pushing what, 80 GFLOPS????

if we already have GPUs pushing, say, 200 GFLOPS, then by 2006 a GPU that pushes, say, 500 GFLOPS wouldn't be THAT illogical... still, if that's the measure by which that 80 GFLOPS figure for the XGPU was taken, then isn't that misleading???

or am I just very confused? I mean, I know I'm confused by nature, but you get my point... :D
 
zidane1strife said:
yes, nvidia's numbers are misleading.


I gathered that, but I would like to know where I stand on this... you know what I mean... I just need to know how these numbers are taken into consideration and stuff like that...
 
or am I just very confused? I mean, I know I'm confused by nature, but you get my point...

I don't think you're confused. Current high end PC GPUs aren't weak sauce.

Today's GPUs can keep going with their vertex and fragment processors, making them faster and adding more functionality, and they will get stunning results, if they aren't already. They don't need to switch to a REYES-like architecture.

1 TFLOPS may sound like much today; in 2 more years everyone will most likely be there.

As for PS3 and a REYES-like architecture, why don't they test PS2 with that architecture?
 
V3 said:
or am I just very confused? I mean, I know I'm confused by nature, but you get my point...

I don't think you're confused. Current high end PC GPUs aren't weak sauce.

Today's GPUs can keep going with their vertex and fragment processors, making them faster and adding more functionality, and they will get stunning results, if they aren't already. They don't need to switch to a REYES-like architecture.

1 TFLOPS may sound like much today; in 2 more years everyone will most likely be there.

As for PS3 and a REYES-like architecture, why don't they test PS2 with that architecture?



but the thing is, how can Nvidia or whoever else say that the XGPU can push like 80 GFLOPS and stay serious?
I mean, the 6.4 GFLOPS of PS2 is strictly correlated to its polygon pushing power.
With the XGPU, that 80 GFLOPS figure has to do with something else, right... that's where I'm confused...
if PS3 pushes 1 TFLOPS the "PS2 way" then I'm happy, cuz that would mean LOADS of polys, but if it's the "XBOX way" then I'm concerned...
 
What I don't like is the fact that even the top PC games look like more of the same...

That is to say, they look like pumped up/beefed up current console games... more polys ( in some cases fewer ), higher-res textures, better IQ and rez, slightly better animations and physics...

What I want to see is something that completely blows away what we have today...

 
PC GPUs are burning a lot of FLOPs on fixed function; that is good if you use those features, otherwise the power is wasted.

I said it before and I'll say it again: avoiding fixed functions is Sony's thing. I am sure they'll stick to that approach.
 
I mean, the 6.4 GFLOPS of PS2 is strictly correlated to its polygon pushing power.

Well, all those per-pixel effects are done better using FLOPs. But NV counted everything that is floating point in their pipeline, AFAIK; nothing wrong with that. Those are GPUs after all. We don't really have a standard for it.
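To see how the same silicon can yield wildly different headline numbers depending on what gets counted, a back-of-envelope sketch (all unit counts and clocks below are made-up illustration figures, not real chip specs):

```python
# Peak FLOPS is just units x ops-per-unit-per-clock x clock, so the
# headline number depends entirely on which units you choose to count.

def peak_gflops(units, flops_per_unit_per_clock, clock_mhz):
    return units * flops_per_unit_per_clock * clock_mhz / 1000.0

# Counting only the vertex pipeline (illustrative numbers):
narrow = peak_gflops(units=2, flops_per_unit_per_clock=8, clock_mhz=250)

# Counting every floating-point unit in the chip (vertex + pixel +
# texture filtering, etc.): same silicon, much bigger headline figure
broad = peak_gflops(units=16, flops_per_unit_per_clock=20, clock_mhz=250)

print(narrow, broad)  # 4.0 80.0
```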
 
ChryZ said:
PC GPUs are burning a lot of FLOPs on fixed function; that is good if you use those features, otherwise the power is wasted.

I said it before and I'll say it again: avoiding fixed functions is Sony's thing. I am sure they'll stick to that approach.


ok, so PS2's FLOPS measurement is what PS3's FLOPS measurement will be, right...
 
V3, they did try it... with the GSCube that is...

PlayStation 2 doesn't fit the requirements exactly... it lacks CPU power and bandwidth, but still its approach is similar to what you would need... the GS LOVES small polygons; the GS seems to be able to push more polygons than the EE can provide ( listening to all the PlayStation 2 coders here )...
 
V3, they did try it... with the GSCube that is...

The GSCube doesn't come close to REYES. It was just a well-animated, shaded, high-res poly model, done for a high-resolution display. The flow chart you posted earlier, how much of that did the GSCube perform?
 
Doesn't even come close?

That was 1+ billion vertices/s ( and they had a version with more processors too )...

How much of that chart or similar did it do ? Honestly, I cannot give you an exact answer as I do not know the details...

I do not know if they fully implemented REYES...

PlayStation 2 is an architecture which is friendly to the kind of development you would do for a REYES-like architecture: it doesn't like multi-texturing too much, it doesn't like big polygons, and it enjoys lots of tiny polygons and vertex lighting...

Pushing that forward hundreds of times would allow a micro-polygon based engine to run in real-time at 30-60 fps...
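A rough budget sketch of that "hundreds of times" claim, with purely illustrative guesses for resolution, depth complexity, and per-vertex shader cost (none of these figures come from the patent):

```python
# Back-of-envelope: how many micropolygons per second a REYES-style
# frame needs, and roughly what that costs in shading FLOPS.

def micropolys_per_second(width, height, fps, depth_complexity=4):
    # Roughly one micropolygon per pixel, times how many surfaces
    # overlap each pixel on average (depth complexity)
    return width * height * depth_complexity * fps

def shading_gflops(mp_per_sec, flops_per_vertex=100):
    # Assume ~100 FLOPs of shader work per shaded micropolygon vertex
    return mp_per_sec * flops_per_vertex / 1e9

rate = micropolys_per_second(1280, 720, 60)
print(rate, shading_gflops(rate))  # 221184000 22.1184
```

Even these conservative guesses land in the tens of GFLOPS for shading alone, which is why the thread keeps coming back to where the shading FLOPS live.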

Let's not go strictly by the specs of that paper... by fast they meant 1 year :p
 
I posted it on the other PS3 thread.


I was googling for some screens from the upcoming Hulk movie and came across this old article from SIGGRAPH 2000, I think.

http://www.gamasutra.com/features/20000804/crespo_01.htm

An interesting reminder of what to look forward to.

At the Sony booth, we enjoyed a real-time battle between characters from the movie Antz rendered in real time, as well as interactive sequences from the upcoming Final Fantasy movie shown at 1920x1080 pixels and a sustained rate of 60FPS.

In the Antz demo, I counted 140 ants, each comprising about 7,000 polygons, which were rendered using a ported version of Criterion's Renderware 3. All ants were texture mapped, and the results looked surprisingly close to the quality of the original movie. The Final Fantasy demo was just data from the now-in-development full-length CG movie based upon the game series, rendered in real time by the GScube. It showed a girl (with animated hair threads) in a zero-gravity spaceship, with a user-controllable camera viewpoint. The demo rendered about 314,000 polygons per frame, and included an impressive character with 161 joints, motion-blurring effects, and many other cinematic feats. According to Kazuyuki Hashimoto, senior vice president and CTO of Square USA, the GScube allowed them to show real-time quality, in "close to what is traditionally software rendered in about five hours." Sony believes that the GScube will deliver a tenfold improvement over a regular PS2, and future iterations of the architecture expect to reach a 100-fold improvement.

Don't think it's REYES.
 
You are probably correct... I thought they were using micro-polygons...

They surely had the power with the GSCube, and with PlayStation 3 I think they will surely have the power to allow the use of micro-polygons...
 
They surely had the power with the GSCube

Check out the # of bones they used.

Well, if they go the micropolygon route, they had better have a good sampling algorithm in place.
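One such "good sampling algorithm" is the stratified (jittered) sampling described in the stochastic sampling paper mentioned earlier in the thread, which trades visible aliasing for less objectionable noise. A minimal sketch (helper name hypothetical):

```python
# Stratified (jittered) sampling: divide each pixel into an n x n grid
# of cells and place one random sample inside each cell, so samples
# are random but never clump into one corner of the pixel.
import random

def jittered_samples(pixel_x, pixel_y, n=4, rng=random):
    """Return n*n stratified sample positions inside one pixel."""
    samples = []
    for i in range(n):
        for j in range(n):
            sx = pixel_x + (i + rng.random()) / n
            sy = pixel_y + (j + rng.random()) / n
            samples.append((sx, sy))
    return samples

pts = jittered_samples(0, 0, n=4)
print(len(pts))  # 16 samples, one per 1/4 x 1/4 sub-pixel cell
```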
 