What is PPP?

DemoCoder said:
Any references to back it up? At 120M transistors, I don't see much room to fit the PPP, so if it was removed, it must have been removed in the very very early stages, such as "oh, it was on the wishlist, but won't fit"

Unless of course, the PPP is already there, but non-functional on HW.

It was scrapped in early stages, although I'm not sure exactly when. Mid 2001 perhaps.
So some work was done on it, but it most likely never ever was tried in silicon.

Ailuros: Yeah, it's overkill - for the gaming community, at least!
Having a PPP while the competition wouldn't seems to be a nice advantage in the workstation market IMO.


Uttar
 
I don't think a PPP is overkill. We really need robust HOS to push the improved geometry power of modern video cards.
 
I did ask for an explanation in English but I guess I have to live with the fact that I will never truly understand what PPP is... :cry:
 
davepermen said:
sure. i just had no need for it, and don't see how my raytracer stuff will need it.

well, then again, yes, a PPP could get abused for per-frame-updates.. :D

Well, then how does your "ray tracing on the GPU" implementation work? You fire a ray through the scene, and intersect like 3 or 4 objects. How do you store your scene structure on the GPU, and how does your GPU iterate over the intersections? What if your scene's geometry changes every frame?

Multipass? Well, ooops, there goes performance.

As proven a few years ago, you can use OpenGL to implement any arbitrary computation on the GPU (without pixel shaders, I might add) using multipass. So theoretically, you don't "need" anything modern GPUs have. But just because you can hack some algorithm onto the GPU doesn't mean it will be efficient.
 
Tahir said:
I did ask for an explanation in English but I guess I have to live with the fact that I will never truly understand what PPP is... :cry:

Alright, to try to make it simple, the way I imagine a PPP would be implemented:

A PPP is a unit that operates on provided data to generate geometry. Normally (as it is today, without a PPP), geometry is provided in vertex buffers. This geometry is passed directly into the vertex shader, one vertex at a time, for each vertex that needs to be processed. A vertex shader operates on a single vertex only; it knows nothing about the other vertices of a particular triangle.
A PPP would change this model: instead of the vertices being passed directly from the vertex buffers into the vertex shader, the PPP would read data from a vertex buffer and process it in some way to generate triangles. The vertices of these triangles would then be processed by the vertex shader as usual.

Another possibility I'm thinking of would be that the PPP is integrated directly into the vertex shader. Basically, the PPP could just perform the transform and everything else that the vertex shader would otherwise do.
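As a concrete sketch of the first model, here's a hypothetical PPP program tessellating each input quad into two triangles. Every name here is made up for illustration; none of this is a real API:
Code:
//hypothetical "PPP program": reads four control vertices per quad
//and emits two triangles; the vertices of the emitted triangles
//then run through the vertex shader as usual
void
onQuad()
{
   Vertex c[4];
   for (int i=0; i<4; ++i)
       c[i]=fetch_control_vertex();  //raw data from the vertex buffer

   emit_triangle(c[0],c[1],c[2]);
   emit_triangle(c[0],c[2],c[3]);
}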
 
Ailuros said:
Uttar said:
A PPP was in the original NV30 designs AFAIK, although it was relatively scrapped I believe.

Both NV40 and R400 have PPPs, but the R420 does not have one.


Uttar

Irrespective of which feature list it's included on, it's still somewhat of a transistor overkill nowadays, since for full functionality they're basically restricted to OGL for the time being. I honestly doubt that M$ is going to do anything about it prior to DX-Next.

Wouldn't a program using NVIDIA's Cg allow a PPP to be taken advantage of?
 
Chalnoth said:
I don't think a PPP is overkill. We really need robust HOS to push the improved geometry power of modern video cards.

It's overkill in the sense that it's restricted at the moment to just one API and I frankly don't see many developers lately working with OGL anymore.
 
DemoCoder said:
davepermen said:
sure. i just had no need for it, and don't see how my raytracer stuff will need it.

well, then again, yes, a PPP could get abused for per-frame-updates.. :D

Well, then how does your "ray tracing on the GPU" implementation work? You fire a ray through the scene, and intersect like 3 or 4 objects. How do you store your scene structure on the GPU, and how does your GPU iterate over the intersections? What if your scene's geometry changes every frame?

Multipass? Well, ooops, there goes performance.

As proven a few years ago, you can use OpenGL to implement any arbitrary computation on the GPU (without pixel shaders, I might add) using multipass. So theoretically, you don't "need" anything modern GPUs have. But just because you can hack some algorithm onto the GPU doesn't mean it will be efficient.

not allowed to talk, NDA..

:D no, seriously. i don't actually use the gpu very much for real raytracing, main issue is the AGP there.. i would love to schedule some tasks to the gpu, and then do additional work again on the cpu.. but that will just not really work well before PCI-express cards..

there are other, much bigger issues, why raytracing on gpu's doesn't work that well yet..

on the other hand, 1500 fps when raytracing a sphere with ARB_fragment_program sounds promising :D that's fun :D (on 320x240.. 70fps on 1280x1024.. :D)
 
Ailuros said:
It's overkill in the sense that it's restricted at the moment to just one API and I frankly don't see many developers lately working with OGL anymore.
Why would it be restricted to just one API? We have yet to see a major feature exposed only in one API or the other.
 
davepermen said:
:D no, seriously. i don't actually use the gpu very much for real raytracing, main issue is the AGP there.. i would love to schedule some tasks to the gpu, and then do additional work again on the cpu.. but that will just not really work well before PCI-express cards..

Well, the context of this discussion was the PPP and its relation to doing ray tracing on the GPU, as originally mentioned in a paper by NVIDIA's William Mark.

One of the issues of course, is accelerating traversal of scene data structures on the GPU. Today, that traversal is done by multipass.
 
Ailuros said:
It's overkill in the sense that it's restricted at the moment to just one API and I frankly don't see many developers lately working with OGL anymore.
Actually, the situation is not that bad. DX9 exposes at least some possible functionality of a PPP (but lacks the programmability, of course):
- N-Patches
- Triangular Bézier Patches
- Rectangular Bézier (linear, cubic, quintic), B-Spline (linear, cubic, quintic) and Catmull-Rom (cubic) Patches
- Adaptive tessellation for all of the above
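Under the hood, all of those patch types come down to repeated linear interpolation of control points. A minimal sketch of the 1D cubic Bézier case via de Casteljau's algorithm (illustrative only, not the actual DX9 tessellator):

```c
/* One coordinate of a cubic Bezier curve at parameter t in [0,1],
   via de Casteljau's algorithm. A cubic Bezier *patch* applies this
   twice: once along each row of the 4x4 control grid, then once
   across the four row results. */
double bezier3(double t, double p0, double p1, double p2, double p3)
{
    double a = p0 + t*(p1-p0);   /* first level of lerps */
    double b = p1 + t*(p2-p1);
    double c = p2 + t*(p3-p2);
    double d = a + t*(b-a);      /* second level */
    double e = b + t*(c-b);
    return d + t*(e-d);          /* third level: the curve point */
}
```

Adaptive tessellation then just means choosing how many t samples to take per patch.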
 
Tahir,
PPP makes primitive assembly programmable. To give you an idea of how primitive assembly might be programmed, here are two prominent things that are available today, as fixed function:
_______________
separate triangles
Code:
//these vars will be kept for the entire draw call
static int verts_in=0;

//temporary storage for transformed verts
static Vertex v[3];

void
onBegin()      //resets the vertex counter when a batch is started
{
   verts_in=0;
}

void
onVertex()     //'called' for every incoming vertex
{
   v[verts_in]=fetch_and_transform_one_vertex();
   ++verts_in;
   //if we have three verts, we dispatch a triangle to trisetup/rasterizer
   if (verts_in==3)
   {
       dispatch_triangle(v[0],v[1],v[2]);
       verts_in=0;
   }
}

_______________
triangle fan
Code:
//these vars will be kept for the entire draw call
static int verts_in=0;

//temporary storage for transformed verts
static Vertex v[3];

void
onBegin()      //resets the vertex counter when a batch is started
{
   verts_in=0;
}

void
onVertex()     //'called' for every incoming vertex
{
   v[verts_in]=fetch_and_transform_one_vertex();
   ++verts_in;
   //if we have three verts, we dispatch a triangle to trisetup/rasterizer
   if (verts_in==3)
   {
       dispatch_triangle(v[0],v[1],v[2]);
       verts_in=1;     //keep and reuse v[0]
   }
}
When you look closely, you'll notice that the only difference is the second line inside the if {}.

fetch_and_transform_one_vertex() would invoke the vertex shader or the fixed function transform, depending on current state (... or fetch from post-transform cache).

To put this in perspective with API usage:
Code:
glBegin(GL_TRIANGLES);  //selects a "PPP program", which is currently fixed function
                                      //also 'calls' onBegin()
glVertex*(<...>);             //'calls' onVertex()
glVertex*(<...>);
glVertex*(<...>);
<...>

glEnd();                          //ends a batch
Verts can of course be sourced from arrays of some sort, and with the help of indices, but that's the basic idea.
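For completeness, a triangle strip in the same hypothetical style would again only change the bookkeeping inside the if {} (sketch only; note that strips also have to flip the winding on every other triangle):
Code:
static int verts_in=0;
static int odd=0;      //strips alternate the winding order
static Vertex v[3];

void
onBegin()
{
   verts_in=0;
   odd=0;
}

void
onVertex()
{
   v[verts_in]=fetch_and_transform_one_vertex();
   ++verts_in;
   if (verts_in==3)
   {
       if (odd) dispatch_triangle(v[0],v[2],v[1]);  //flipped winding
       else     dispatch_triangle(v[0],v[1],v[2]);
       odd=!odd;
       v[0]=v[1];      //keep and reuse the last two verts
       v[1]=v[2];
       verts_in=2;
   }
}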
HTH :)

edit: ditched non-working bold inside of code tags, added hint towards difference
 
Tahir said:
I did ask for an explanation in English but I guess I have to live with the fact that I will never truly understand what PPP is... :cry:
Don't feel bad, I just read this thread thru twice and I still have no clue either.

It's threads like these that keep me humble and keep reminding me just how little I actually know, but that's half the fun of this place. :)
 
digitalwanderer said:
Don't feel bad, I just read this thread thru twice and I still have no clue either.

It's threads like these that keep me humble and keep reminding me just how little I actually know, but that's half the fun of this place. :)
Bah, it's not that complex. While the actual implementation may be, the idea is not.

A PPP is the name currently coined for a section of a GPU that would be able to generate new triangles in a general fashion. Modern GPUs either have no functions for generating new triangles, or only very limited ones. A PPP has two primary benefits:

1. By creating geometry on the GPU, less information has to be transferred across the AGP bus, and less information has to be processed by the CPU.

2. By creating geometry on the GPU, it becomes easier to dynamically scale the amount of geometry in the scene, allowing for better performance characteristics across disparate architectures.
 
I thought one of the advantages of PPP would be a lower memory footprint for the geometry being used in a scene?
 
Brimstone said:
I thought one of the advantages of PPP would be a lower memory footprint for the geometry being used in a scene?
Yep. That's implied by Chalnoth's #1, where he says less data is transferred over AGP.
 
how about using a PPP to convert non-polygonal objects into triangles for rendering? like, would a PPP possibly be able to tessellate a NURBS surface into triangles to be fed to the vertex shader? That's the sort of thing I understand it could do.
 
Sage said:
how about using a PPP to convert non-polygonal objects into triangles for rendering? like, would a PPP possibly be able to tessellate a NURBS surface into triangles to be fed to the vertex shader? That's the sort of thing I understand it could do.
Right, that's the way I understand it as well. The only question is whether or not the first iteration of PPP will be flexible and powerful enough to do such a thing within a game.
 