(multipass) problem combining spotlight texture and planar shadows

Hello everyone :)!

I have been working on a small tech demo (I am iterating on it, adding techniques as I learn them, as a way to learn the ropes of OpenGL ES programming on iDevices for an iPad application I am working on).
Thanks to its quite good pipeline I chose to start from the SIO2 engine, but the work I am doing now extends a bit beyond staying within the bounds of OpenGL ES 1.1, which is what SIO2 targets.
Since I am working on something that is going to run on SGX-based devices (I am mainly targeting the iPad), I can go a bit beyond what the engine provides, as SIO2 is designed for maximum compatibility with MBX-based iDevices (no stencil buffer, only 2 TUs, etc...). One thing gained from all of this is that code that plays around with the texture matrix does not need to worry about getting perspective-correct texturing/coordinate interpolation at the pixel level (which is a problem with the PowerVR MBX).

The two problems I am having now (with this multi-pass implementation, which follows the basic rendering loop outlined here: http://titan.cs.ukzn.ac.za/opengl/opengl-d6/adv-course/notes/node100.html) are:

1.) A false positive from the backprojection-fix code (the standard clipper-texture approach: the plane's equation in the first row of the texture matrix used to sample the clipper texture, the second and third rows left empty, and only the fourth column of the fourth row set to 1.0f, with a white texture that has a black texel at (s,t) = (0,0))... some portions of the scene do not receive the projected texture because the pixel in question is wrongly flagged as sitting behind the projector's plane. The matrix setup I use is sketched in the first code block below.

2.) The spotlight is not brightening the shadowed areas correctly... you can see that they do get a little brighter, but not enough. The planar shadows are rendered last (using the stencil buffer first to delimit the area they are cast upon and then to mark where they fall; changing the stencil test to recognize the shadowed areas, I re-render the shadow-receiving plane with the blocked light turned off) and are blended on top of the result of the previous passes. The stencil sequence is sketched in the second code block below.
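For reference, this is roughly how I build that clipper-texture matrix (a minimal sketch, not the exact demo code; planeEq is just an illustrative name for the projector plane's (A, B, C, D) coefficients, expressed in the same space as the texcoords I submit):
Code:
/* Sketch of the clipper-texture matrix: plane equation in row 0, q forced to 1 */
GLfloat m[16] = { 0.0f };                 /* column-major, everything else zero   */
m[0]  = planeEq[0];                       /* row 0 -> s = A*x + B*y + C*z + D*w    */
m[4]  = planeEq[1];
m[8]  = planeEq[2];
m[12] = planeEq[3];
m[15] = 1.0f;                             /* row 3, column 3 -> q = 1              */

glActiveTexture(GL_TEXTURE1);             /* whichever unit holds the clipper tex  */
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(m);
/* With GL_CLAMP_TO_EDGE, s <= 0 clamps onto the black texel at (0,0) and the
   modulation kills back-projected fragments; s > 0 samples plain white.          */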
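And this is roughly how the planar-shadow stencil sequence is structured (again only a sketch; drawReceiver()/drawProjectedCasters() and the stencil reference values are placeholders, not the actual demo functions):
Code:
/* Sketch of the stencil sequence for the planar shadows */
glEnable(GL_STENCIL_TEST);

/* 1) delimit the receiver: stencil = 1 wherever the receiving plane is visible */
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawReceiver();

/* 2) mark where the shadows fall: flatten the casters onto the plane and bump
      the stencil to 2 only where they land on the receiver                     */
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
drawProjectedCasters();                   /* shadow projection matrix applied    */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* 3) re-render the receiver where stencil == 2 with the blocked light off,
      blended on top of the previous passes                                      */
glStencilFunc(GL_EQUAL, 2, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDisable(GL_LIGHT0);                     /* the light that is blocked            */
glEnable(GL_BLEND);
drawReceiver();
glEnable(GL_LIGHT0);

glDisable(GL_STENCIL_TEST);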

Do you have any suggestions (besides "use multi-texturing, you fool!") concerning these two issues?

You can see them in action here: http://www.youtube.com/watch?v=tGC6y_zT4MA

This is my rendering loop basically:
Code:
void templateRender( void )
{	
	const GLfloat light0Ambient[] = {0.08, 0.08, 0.08, 1.0};
	//// Basic rendering loop setup at the beginning of the current frame
	{
		glStencilMask(0xFF);
		
		glLightfv(GL_LIGHT0, GL_AMBIENT, light0Ambient);
		
		glClearColor(light0Ambient[0], light0Ambient[1], light0Ambient[2], 1);
		glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );
	}
	////
	
	MATRIX mView;
	[[EAGLView sharedGLView]matLookAt:&mView eye:sio2->_SIO2camera->_SIO2transform->loc 
					 vDir:sio2->_SIO2camera->_SIO2transform->dir upVector:(vec3*)&upWorld];
	/*Initialize the depth buffer.
    Clear the color buffer to a constant value which represents the scene ambient illumination.
    Draw the scene with depth buffering enabled and color buffer writes disabled.
    Load and enable the spotlight texture, set the texture environment to GL_MODULATE.
    Enable the texgen functions, load the texture matrix.
    Enable blending and set the blend function to GL_ONE, GL_ONE.
    Disable depth buffer updates and set the depth function to GL_EQUAL.
    Draw the scene with the vertex colors set to 1.0.
    Disable the spotlight texture, texgen and texture transformation.
    Set the blend function to GL_DST_COLOR.
    Draw the scene with normal illumination.*/
	glEnable(GL_BLEND);
	if(INVERTED_ZRANGE) glDepthFunc(GL_GREATER);
	else glDepthFunc(GL_LESS);
	
	[[EAGLView sharedGLView]renderCameraModelViewMatrix:&mView enableLandscape:TRUE];
	{
		BOOL projEnable = projectorEnable;
		//projEnable = NO;
		if(projEnable){
			glDisable(GL_LIGHTING);
			
			DEPTH_WRITES_ON();
			
			RGBA_WRITES_OFF();
			[[EAGLView sharedGLView] renderAllObjects];
			RGBA_WRITES_ON();
			
			glBlendEquationOES( GL_FUNC_ADD_OES );
			glBlendFunc( GL_ONE, GL_ONE );
			
			[[[EAGLView sharedGLView]sceneProjector] calculateProjector];

			glDepthFunc(GL_EQUAL);

			[[[EAGLView sharedGLView]sceneProjector] rotateZ:0.48f];
			[[[EAGLView sharedGLView]sceneProjector] submitProjectorGL];

			glEnable(GL_LIGHTING);
			//glEnable(GL_DEPTH_TEST);
		}
		
		DEPTH_WRITES_ON();
		RGBA_WRITES_OFF();
		[[EAGLView sharedGLView] renderAllObjects];
		RGBA_WRITES_ON();
		DEPTH_WRITES_OFF();
		
		sio2LampEnableLight();
		{	
			[[EAGLView sharedGLView]renderIPO];
			[[EAGLView sharedGLView]renderLights];	
			glEnable(GL_LIGHTING);
			glEnable(GL_LIGHT0);
			glLightfv(GL_LIGHT0, GL_AMBIENT, light0Ambient);
			
			glBlendEquationOES( GL_FUNC_ADD_OES );
			glBlendFunc( GL_ONE, GL_SRC_COLOR );
			
			if(INVERTED_ZRANGE) glDepthFunc(GL_GEQUAL);
			else glDepthFunc(GL_LEQUAL);
			
			[[EAGLView sharedGLView]renderShadowCasters];
			for (int i = 0; i < 3; i++) {
				[[EAGLView sharedGLView]renderShadowReceiverObject:shadowReceivers[i]];
			}
			
			renderPlanarShadows();
			
			//STENCIL_ON();
			//glEnable(GL_BLEND);
			//[[[EAGLView sharedGLView]sceneProjector] submitProjectorGL];
			
			//renderPlanarShadows();
			STENCIL_OFF();
		}
		
		SIO2camera* projector = ( SIO2camera * )sio2ResourceGet( sio2->_SIO2resource,
													SIO2_CAMERA,
													(char*)"camera/Projector");
		[[EAGLView sharedGLView]drawNormal:projector->_SIO2transform->dir ofPoint:projector->_SIO2transform->loc];
		[[EAGLView sharedGLView]drawCameraFrustum:projector aspectRatio:1.0f];
		
		SIO2_SOFT_RESET();
	}
	MODELVIEW_LANDSCAPE_OFF();
	
	//templateRender2D();
	[[EAGLView sharedGLView]setUpdateCameraModelViewMatrix:TRUE];
	[SysTools checkGLerrors];
	
	templateGameLoop();
}

http://code.google.com/p/si02-r-n-d..._iPhone_iPad_Application/Classes/Projector.mm (it contains the texture projection code, texture matrices included)

http://code.google.com/p/si02-r-n-d...C_iPhone_iPad_Application/Classes/EAGLView.mm (various math functions and general OpenGL ES glue code...)


P.S.: http://forum.sio2interactive.com/viewtopic.php?f=3&t=841&start=10#p4338 (some more blabbering about the techniques used in this demo)

http://forum.sio2interactive.com/viewtopic.php?f=3&t=841#p4296 (more blabbering about texture projection)

http://forum.sio2interactive.com/viewtopic.php?f=3&t=841#p4300 (even more blabbering about planar shadows and the use of the stencil buffer to fix that technique's shortcomings)
 
I did solve the issue with the backprojection fix. It turns out I was not properly setting and resetting the client state when I provided the texture coordinates for each texture unit. I still have to get the hang of the separation between the GL client (the app) and the GL server (the GL driver) and of which state should be set through which.
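For the record, the gist of the fix is that the texcoord array is client state selected with glClientActiveTexture, while the bind/env/matrix are server state selected with glActiveTexture, so every unit needs both calls and the array has to be reset once the draw is done. A minimal sketch (illustrative unit and texture names):
Code:
glActiveTexture(GL_TEXTURE2);              /* server side: bind / env / matrix   */
glBindTexture(GL_TEXTURE_2D, projTex);

glClientActiveTexture(GL_TEXTURE2);        /* client side: the texcoord array    */
glTexCoordPointer(3, GL_FLOAT, 0, NULL);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

/* ... draw ... */

glClientActiveTexture(GL_TEXTURE2);        /* and reset the client state after   */
glDisableClientState(GL_TEXTURE_COORD_ARRAY);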

I have also added a new feature, which still needs some fine tuning as it is creating some side effects, that targets the problem of back-facing polygons (from the projector's point of view).
I use another texture stage for this, taking advantage of the possibilities offered by the texture matrix and the texture environment stages (you have 8 stages on the SGX).

If you structure the test so that "pass" means a strictly positive (>0) result and "fail" means any result <= 0, you can even reuse the same backprojection-fix texture, with a black texel at (s,t) = (0,0) and white everywhere else, and GL_CLAMP_TO_EDGE as the wrapping mode for both the S and T coordinates.
Now you need an equation that produces such a result and tests each face to see whether it faces the projector or looks away from it.
If you take the same texture matrix you used for the backprojection fix, you are quite close... dot(Nprojector, VectorA), assuming VectorA is a surface normal, tells you whether the vector is parallel (facing away from the projector), anti-parallel (facing the projector), or perpendicular, all just by looking at the sign of the scalar result.
Still, such an equation gives you the opposite of what you are looking for... you want a result > 0 to mean anti-parallel. This is easily fixed by providing the negated Nprojector unit normal.
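Concretely, the back-face matrix (mBackFacesFix in my code) ends up looking roughly like this (sketch only; projDir is an illustrative name for the projector's unit direction vector, assumed to live in the same space as the normals):
Code:
/* Sketch: -Nprojector in row 0, so sampling with the normals as texcoords gives
   s = dot(-Nprojector, N), positive for faces looking at the projector          */
GLfloat m[16] = { 0.0f };
m[0]  = -projDir.x;
m[4]  = -projDir.y;
m[8]  = -projDir.z;
m[15] = 1.0f;                             /* q = 1                                */

glActiveTexture(GL_TEXTURE3);             /* whichever unit holds the cull texture */
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(m);
/* The per-vertex normals are then submitted as 3-component texcoords for this
   unit: a face looking at the projector samples the white part (s > 0), a face
   looking away clamps onto the black texel at (0,0).                            */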

The problem with this approach is choosing an appropriate VectorA for each surface. Just taking the vertex normals and sending them as texture coordinates will work, but it will not look too nice... you get some weird, non-smooth transitions when the projector moves across surfaces.
 
One thing gained from all of this is that code that plays around with the texture matrix does not need to worry about getting perspective-correct texturing/coordinate interpolation at the pixel level (which is a problem with the PowerVR MBX).
Sorry for the slight derail, but what specifically is the MBX problem?
 
Sorry for the slight derail, but what specifically is the MBX problem?

No derail happened :). Texture coordinates are not properly interpolated, which makes projecting textures more complex than setting the texture matrix to Bias*ProjectionProjector*ViewProjector and submitting the vertex data as texture coordinates (it should still work, but it might not look as good as with proper interpolation).
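In other words, something along these lines (a minimal sketch with illustrative names; matBias remaps clip space [-1,1] to texture space [0,1]):
Code:
/* Sketch of the Bias * ProjectionProjector * ViewProjector texture matrix chain */
static const GLfloat matBias[16] = {
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f      /* column-major: translation lives in the last column */
};

glMatrixMode(GL_TEXTURE);
glLoadMatrixf(matBias);
glMultMatrixf(projectorProjectionMatrix);   /* the projector's projection matrix  */
glMultMatrixf(projectorViewMatrix);         /* the projector's view (look-at)      */
/* The vertex positions are then submitted as texcoords; the hardware's divide by
   q does the projection, which is exactly where per-pixel interpolation of q
   matters (the MBX limitation discussed here).                                    */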

http://developer.apple.com/library/...uide/OpenGLESPlatforms/OpenGLESPlatforms.html
Known Limitations and Issues

The PowerVR MBX implementation of OpenGL ES 1.1 has a number of limitations that are not shared by iPhone Simulator or the PowerVR SGX:

[...]

Perspective-correct texturing is supported only for the S and T texture coordinates. The Q coordinate is not interpolated with perspective correction.

From what I heard it is interpolated, but per vertex and not per pixel, still... that's what I was referring to.
 
I'm confused by that.

If you are doing projective textures, i.e. so you want to interpolate (per pixel, "in perspective") S'*Q, T'*Q, Q, and then get a final result of S = interpolated(S' * Q) / interpolated(Q) etc., there is no need to do the interpolation of S'*Q and Q 'in perspective', as the usual divisions by "w" on each of those would otherwise cancel out when the hardware does the final, per pixel, "interpolated(S' * Q) / interpolated(Q)" division.

I'm really not sure what is going wrong.

Cheers
Simon
 
I'm confused by that.

If you are doing projective textures, i.e. so you want to interpolate (per pixel, "in perspective") S'*Q, T'*Q, Q, and then get a final result of S = interpolated(S' * Q) / interpolated(Q) etc., there is no need to do the interpolation of S'*Q and Q 'in perspective', as the usual divisions by "w" on each of those would otherwise cancel out when the hardware does the final, per pixel, "interpolated(S' * Q) / interpolated(Q)" division.

I'm really not sure what is going wrong.

Cheers
Simon

Hello Simon,

I found this reference (there are more, but this is the first good one I could find this morning):

http://www.khronos.org/message_boar...33&sid=5f5cf222979b913054843da9ec21d3d6#p3333
Xmas said:
Both EYE_LINEAR and OBJECT_LINEAR texgen modes can easily be implemented using the texture matrix, and re-using the same vertex data array for both positions and texcoords. Simply use the plane coefficients you pass as OBJECT_PLANE for each of the texcoord components as a row of the texture matrix.

Note however that some OpenGL ES implementations don't perform perspective correct interpolation with projected texture coordinates (i.e. where Q != 1).

I think he is referring to the limitation on the MBX and other devices like it that the Apple doc was mentioning.

BTW, he posted on the Apple forums warning against the infinite far plane projection matrix trick... it did sound helpful for shadow volumes projected to infinity... :(...
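For anyone wondering, the trick he warned against boils down to a standard perspective matrix with the far-plane terms pushed to the limit far -> infinity. A sketch with illustrative names (f = cot(fovy/2), n = near plane):
Code:
/* Sketch of the "infinite far plane" projection matrix, column-major */
GLfloat mInf[16] = { 0.0f };
mInf[0]  = f / aspect;
mInf[5]  = f;
mInf[10] = -1.0f;            /* limit of (far+near)/(near-far) as far -> infinity */
mInf[11] = -1.0f;
mInf[14] = -2.0f * n;        /* limit of 2*far*near/(near-far) as far -> infinity */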
P.S.: The problem I wanted to solve now was to implement a nice single-pass solution (minus the shadow pass for now), fusing the projection and diffuse passes together.
I want to be able to say something like this:
OutPut_RGB = DiffuseTex_RGB + (ProjTex_RGB * BackProjFixTex_RGB * BackFaceCullTex_RGB)

This would give me the ability to brighten the diffuse texture only where the projector's texture hits the surface, so to speak, and only for those fragments, leaving the rest of the scene at its normal lighting/color conditions.
Right now, I am forced to keep the DiffuseTex in the first texture unit (GL_TEXTURE0), and I apply the other three texture layers on GL_TEXTURE2, GL_TEXTURE3 and GL_TEXTURE4 (I skip GL_TEXTURE1 because I am reserving it for another texture layer available to Blender).

This is a problem because I would have to set up the following equation:
OutPut_RGB = (ProjTex_RGB * BackProjFixTex_RGB * BackFaceCullTex_RGB) + DiffuseTex_RGB

which is like saying:
OutPut_RGB = (TEX_UNIT2_RGB * TEX_UNIT3_RGB * TEX_UNIT4_RGB) + TEX_UNIT0_RGB

I do not know how to make this equation come to life with the texture environment stages... not in a single pass...
Is moving the diffuse texture layer beyond the projector-related layers the only way to make the former equation work without resorting to multi-pass?
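For what it's worth, if I do move the diffuse layer to the last enabled unit, I think the chain becomes straightforward, since each combiner stage only chains off GL_PREVIOUS. A sketch with illustrative unit names (not the actual macros from my code):
Code:
/* Sketch: OutPut = Diffuse + (Proj * Clip * Cull), diffuse on the LAST unit */

glActiveTexture(GL_TEXTURE1);   /* projector texture: start the product      */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

glActiveTexture(GL_TEXTURE2);   /* clipper texture: PREVIOUS * TEXTURE       */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glActiveTexture(GL_TEXTURE3);   /* back-face cull texture: PREVIOUS * TEXTURE */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glActiveTexture(GL_TEXTURE4);   /* diffuse texture: PREVIOUS + TEXTURE        */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,  GL_ADD);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB,     GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB,     GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);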
 
The application has been updated to a single-pass approach for everything but the planar shadows, using 5 texture stages. I moved the two base texture slots that SIO2 uses up to TU5 and TU6.

The equation I am using now is basically this:

OUT_RGB = (PROJTEX_RGB * CLIPPERTEX_RGB * CULLTEX_RGB * DIFFUSETEX_RGB) * (1 - VERTEXLIGHT) + (DIFFUSETEX_RGB * VERTEXLIGHT)

Code:
-(void) submitProjectorGL {
	glEnable(GL_LIGHTING);
	
	BOOL fixActive = backprojectionFix;
	
	[self loadProjectorTex];
	[self loadAttenuationTex];	
	
	[self prepareSIO2_TEXENV];
	[self prepareDiffuse0_TEXENV];
	
	glActiveTexture(ATTENUATION_TEX);
	//glEnable(GL_TEXTURE_2D);
	
	glActiveTexture(BCULL_TEX);
	glEnable(GL_TEXTURE_2D);
	glActiveTexture(FIX_TEX);
	glEnable(GL_TEXTURE_2D);
	
	glActiveTexture(PROJ_TEX);
	glEnable(GL_TEXTURE_2D);
	
	glActiveTexture(PROJ_TEX);
	glClientActiveTexture(PROJ_TEX);
	
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Plane")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Plane_2")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Plane_3")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Cylinder")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Cylinder_2")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Cube")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Sphere")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Sphere_001")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Sphere_002")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Sphere_002.001")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Sphere_002.002")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Sphere_003")];
	[self sendVPosAsTexcoords:(SIO2object *)sio2ResourceGet(sio2->_SIO2resource, SIO2_OBJECT, (char*)"object/Sphere_004")];
	
	glActiveTexture(PROJ_TEX);
	glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
	glDisable(GL_TEXTURE_2D);
	
	if(fixActive){
		glActiveTexture(FIX_TEX);
		glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
		glDisable(GL_TEXTURE_2D);
		
		glActiveTexture(BCULL_TEX);
		glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
		glDisable(GL_TEXTURE_2D);
		
		glActiveTexture(ATTENUATION_TEX);
		glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
		glDisable(GL_TEXTURE_2D);
		
		glActiveTexture(DIFFUSE0_TEX);
		glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
		glDisable(GL_TEXTURE_2D);
	}
	
	glActiveTexture(PROJ_TEX);
	glClientActiveTexture(PROJ_TEX);
	glBindBuffer(GL_ARRAY_BUFFER, 0);//restores absolute GL pointers
	
	[SysTools checkGLerrors];
}

[...]

-(void) loadProjectorTex {
	static BOOL loaded = NO;
	
	glActiveTexture(PROJ_TEX);
	glBindTexture(GL_TEXTURE_2D, texture);	
	if (!loaded) {
		glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
		
		//Sample RGB, multiply by previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);   //Modulate RGB with RGB
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
		//Sample ALPHA, use previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);  //Modulate ALPHA with ALPHA
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
		
		// Clamp the texture to its edges and use linear min/mag filtering (weighted average)
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
		
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
		
		//TODO: set TexEnv appropriately in order for multitexturing to enable texture projection to be done in a single pass (additive)
		
		// Specify a 2D texture image, providing a pointer to the image data in memory
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
		
		loaded = YES;
	}
	
	BOOL fixActive = backprojectionFix;
	if(fixActive) [self loadBackProjectionFixTex];
}

-(void) loadBackProjectionFixTex {
	
	static BOOL loaded = NO;
	
	glActiveTexture(FIX_TEX);
	glBindTexture(GL_TEXTURE_2D, backprojectionFixTexture);
	if (!loaded) {glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
		//Sample RGB, multiply by previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);   //Modulate RGB with RGB
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
		//Sample ALPHA, use previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);  //Take ALPHA from the previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
		
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
		
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
		
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
		
		// Specify a 2D texture image, providing a pointer to the image data in memory
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widthFix, heightFix, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteDataFix);
		
		//loaded = YES;
	}
	
	glActiveTexture(BCULL_TEX);
	glBindTexture(GL_TEXTURE_2D, backprojectionFixTexture);
	if (!loaded) {glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
		//Sample RGB, multiply by previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);   //Modulate RGB with RGB
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
		//Sample ALPHA, use previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);  //Take ALPHA from the previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
		
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
		
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
		
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
		
		// Specify a 2D texture image, providing a pointer to the image data in memory
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widthFix, heightFix, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteDataFix);
		
		loaded = YES;
	}
}

-(void) loadAttenuationTex {
	static BOOL loaded = NO; //static like the other loaders, so the combiner setup and upload only run once
	
	glActiveTexture(ATTENUATION_TEX);
	glBindTexture(GL_TEXTURE_2D, attenuationTexture);
	if (!loaded) {glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
		//Sample RGB, multiply by previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);   //Modulate RGB with RGB
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
		//Sample ALPHA, use previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);  //Take ALPHA from the previous texunit result
		glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
		
		glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
		
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
		
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
		glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
		
		// Specify a 2D texture image, providing a pointer to the image data in memory
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, attWidth, attHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteDataAtt);
		
		loaded = YES;
	}
}

-(void) prepareSIO2_TEXENV {
	glActiveTexture(SIO2_TEX0);
	glClientActiveTexture(SIO2_TEX0);
	glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
	glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
	glTexEnvi(GL_TEXTURE_ENV, GL_RGB_SCALE, 1);
	
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS); //PREVIOUS = (PROJTEX * CLIPTEX * CULLTEX) * ATTENUATION * DIFFUSE0_SIO2
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_PRIMARY_COLOR); //PRIMARY_COLOR = VERTEX_LIGHT
	
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_ONE_MINUS_SRC_COLOR);
	
	//Sample ALPHA, use previous texunit result
	glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_ADD);  //Add previous ALPHA and texture ALPHA
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);	
}

-(void) prepareDiffuse0_TEXENV {
	glActiveTexture(DIFFUSE0_TEX);
	glClientActiveTexture(DIFFUSE0_TEX);
	glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
	glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
	glTexEnvi(GL_TEXTURE_ENV, GL_RGB_SCALE, 4);
	
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS); //PREVIOUS = PROJTEX * CLIPTEX * CULLTEX * ATTENUATION
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
	
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
	
	//Sample ALPHA, use previous texunit result
	glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_ADD);  //Add previous ALPHA and texture ALPHA
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
	glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
	glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);	
}

-(void) sendVPosAsTexcoords:(SIO2object*)obj_arg{
	if (!obj_arg) {CMLog (@"(!) error, pointer to SIO2 object is NULL(!)");return;} //bail out before any dereference below
	[SysTools checkGLerrors];
	vec3 plane = *projector->_SIO2transform->dir;
	BOOL fixActive = backprojectionFix;
	
	SIO2object* obj = obj_arg;//( SIO2object*)obj_arg->_SIO2instance;
	
	if (VALID_STR(obj_arg->instname)) {
		NSLog(@"name of instance: %s", obj_arg->instname);
		obj = ( SIO2object*)obj_arg->_SIO2instance;
		glBindBuffer(GL_ARRAY_BUFFER, obj->vbo); //following GL client pointers are relative to this VBO
	}
	else {
		glBindBuffer(GL_ARRAY_BUFFER, obj->vbo);
	}
	
	//if( TU0_USED(obj) ) NSLog (@"\n TU0 ......................... \n");
	//if( TU1_USED(obj) ) NSLog (@"\n TU1......................... \n");
	
	if( !TU0_USED(obj) ) {
		glActiveTexture(PROJ_TEX);
		glDisable(GL_TEXTURE_2D);
		
		glActiveTexture(FIX_TEX);
		glDisable(GL_TEXTURE_2D);
		
		glActiveTexture(BCULL_TEX);
		glDisable(GL_TEXTURE_2D);
		
		glActiveTexture(ATTENUATION_TEX);
		glDisable(GL_TEXTURE_2D);
	}
	
	glMatrixMode(GL_MODELVIEW);
	glPushMatrix();
	{
		glActiveTexture(PROJ_TEX);
		glMatrixMode(GL_TEXTURE);
		glPushMatrix();
		{
			glLoadMatrixf(mTextureProjection.f);
			
			glTranslatef(obj_arg->_SIO2transform->loc->x, 
						 obj_arg->_SIO2transform->loc->y, 
						 obj_arg->_SIO2transform->loc->z);
			
			glClientActiveTexture(PROJ_TEX);
			glTexCoordPointer(3, GL_FLOAT, 0, (void*) NULL);
			glEnableClientState(GL_TEXTURE_COORD_ARRAY);
			
			if (fixActive) {
				glActiveTexture(FIX_TEX);
				glMatrixMode(GL_TEXTURE);
				glPushMatrix();
				{
					glLoadMatrixf(mBackProjectionFix.f);
					
					glTranslatef(obj_arg->_SIO2transform->loc->x, 
								 obj_arg->_SIO2transform->loc->y, 
								 obj_arg->_SIO2transform->loc->z);
					
					glClientActiveTexture(FIX_TEX);
					glTexCoordPointer(3, GL_FLOAT, 0, (void*) NULL); //vertices are at the beginning of the VBO
					glEnableClientState(GL_TEXTURE_COORD_ARRAY);
				}
				
				glActiveTexture(BCULL_TEX);
				glMatrixMode(GL_TEXTURE);
				glPushMatrix();
				{
					glLoadMatrixf(mBackFacesFix.f);
					
					glClientActiveTexture(BCULL_TEX);
					glTexCoordPointer(3, GL_FLOAT, 0, SIO2_BUFFER_OFFSET(obj->vbo_offset[ SIO2_OBJECT_NORMALS ]) );
					glEnableClientState(GL_TEXTURE_COORD_ARRAY);
				}
				
				glActiveTexture(ATTENUATION_TEX);
				glMatrixMode(GL_TEXTURE);
				glPushMatrix();
				{
					glLoadMatrixf(mBackProjectionFix.f);
					
					glTranslatef(obj_arg->_SIO2transform->loc->x, 
								 obj_arg->_SIO2transform->loc->y, 
								 obj_arg->_SIO2transform->loc->z);
					
					glClientActiveTexture(ATTENUATION_TEX);
					glTexCoordPointer(3, GL_FLOAT, 0, (void*) NULL); //vertices are at the beginning of the VBO
					glEnableClientState(GL_TEXTURE_COORD_ARRAY);
				}
				
				if(TU0_USED(obj)) 
				{glActiveTexture(DIFFUSE0_TEX);
					{
						SIO2material* mat = (SIO2material*)sio2ResourceGet(sio2->_SIO2resource, SIO2_MATERIAL, (char*) "material/Material");
						//sio2MaterialRender(mat);
						
						sio2ImageRender( mat->_SIO2image[ 0 ] );
						
						if( mat->_SIO2image[ 0 ]->_SIO2imagebind )
						{ //mat->_SIO2image[ 0 ]->_SIO2imagebind( mat->_SIO2image[ 0 ], 0 );
						}						
						glEnable(GL_TEXTURE_2D);
						glClientActiveTexture(DIFFUSE0_TEX);
						
						glTexCoordPointer(2, GL_FLOAT, 0, SIO2_BUFFER_OFFSET(obj->vbo_offset[ SIO2_OBJECT_TEXUV0 ]) );
						glEnableClientState(GL_TEXTURE_COORD_ARRAY);
					}
				}
			}
		}
	}
	glMatrixMode(GL_MODELVIEW);
	glPopMatrix();
	
	if (!obj_arg) {CMLog (@"(!) error, pointer to SIO2 object is NULL(!)");return;}
	
	glActiveTexture(SIO2_TEX0);
	glClientActiveTexture(SIO2_TEX0);
	
	glEnableClientState(GL_VERTEX_ARRAY);
	glEnableClientState(GL_TEXTURE_COORD_ARRAY);
	
	int _mask = SIO2_LAMP | SIO2_OBJECT_SOLID;
	if(obj_arg->dst ){
		sio2ObjectRender( obj_arg, sio2->_SIO2window, sio2->_SIO2camera,
						 !( _mask & SIO2_RENDER_NO_MATERIAL ),
						 !( _mask & SIO2_RENDER_NO_MATRIX ) );
	}
	glDisableClientState(GL_VERTEX_ARRAY);
	glDisableClientState(GL_TEXTURE_COORD_ARRAY);
	
	SIO2_SOFT_RESET();
	
	if(fixActive) {
		glActiveTexture(FIX_TEX);
		glClientActiveTexture(FIX_TEX);
		glDisableClientState(GL_TEXTURE_COORD_ARRAY);
		glMatrixMode(GL_TEXTURE);
		glPopMatrix();
		
		glActiveTexture(BCULL_TEX);
		glClientActiveTexture(BCULL_TEX);
		glDisableClientState(GL_TEXTURE_COORD_ARRAY);
		glMatrixMode(GL_TEXTURE);
		glPopMatrix();
		
		glActiveTexture(ATTENUATION_TEX);
		glClientActiveTexture(ATTENUATION_TEX);
		glDisableClientState(GL_TEXTURE_COORD_ARRAY);
		glMatrixMode(GL_TEXTURE);
		glPopMatrix();
		
		glActiveTexture(DIFFUSE0_TEX);
		glClientActiveTexture(DIFFUSE0_TEX);
		glDisableClientState(GL_TEXTURE_COORD_ARRAY);
		glDisable(GL_TEXTURE_2D);

	}
	
	glActiveTexture(PROJ_TEX);
	glClientActiveTexture(PROJ_TEX);
	glDisableClientState(GL_TEXTURE_COORD_ARRAY);
	glMatrixMode(GL_TEXTURE);
	glPopMatrix();
	
	glMatrixMode(GL_MODELVIEW);
	//glPopMatrix();
}
 
I am wondering if I should go back to multipass a bit...

The base engine I am working on is SIO2, which is built on top of OpenGL ES 1.1. It supports vertex colors, vertex lighting (diffuse, spotlight, etc.), Blender-defined materials, and so on.

The idea with SIO2 is to work in Blender, define lights, materials, etc., and then export and play it back in-engine. The more you can do in Blender and see on screen with the least coding on the engine side, the better (for artists and programmers).

I am trying different things, and while I am almost at the kind of lighting level I want, there are still problems such as the one you can see in the code itself: when the projector shines on a wall where the vertex light is quite strong, the projection gets dimmed out (it is multiplied by "1-VertexLight" after all) and looks very washed out. Keeping the projector from producing desaturated results in the single-pass implementation was kind of challenging too... I am using 5 texture stages now and I feel like I could use at least one more if I wanted to get the effect looking better... still, that is a lot of stages just for a projector effect... perhaps simplifying the situation could allow for a simple diffuse+vlight pass followed by a pass that accumulates the effect of one or more projectors (roughly what is sketched below).
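Something along these lines is what I have in mind for that simplified multi-pass split (a rough sketch; the draw/setup functions and numProjectors are just placeholders, not actual demo code):
Code:
/* pass 1: normal diffuse + vertex lighting */
glDisable(GL_BLEND);
drawSceneWithMaterialsAndLights();

/* passes 2..n: one additive pass per projector; depth test GL_EQUAL so only the
   already-visible fragments get brightened                                      */
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                 /* accumulate the projector terms    */
for (int i = 0; i < numProjectors; ++i) {
	setupProjectorTexGen(i);                 /* texture matrix, clip, cull, etc.  */
	drawSceneWithProjectorTexturesOnly();    /* proj * clip * cull per fragment   */
}
glDepthMask(GL_TRUE);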
 