Crappy video cards of today can't handle 178,800 triangles

K.I.L.E.R

I'm rendering a scene with a material, a texture, one point light and a sphere.

Incredibly boring scenery, right?

My HW:
A64 3000+
Radeon 9700 Pro 128MB
1GB Kingston RAM

FPS = 13!

Total in scene:
Triangles: 178,800
Vertices: 89,700

How much memory is taken up?
How much bandwidth is taken up?

Here is my estimate:
9 B (1 vertex = 3 coords (x, y, z), each vertex is 3 B) per triangle * 178,800 triangles = 1.6 MB

If you triple that value, you get a rough estimate of the number of cycles needed to calculate lighting and so on.

Why is it going so slow on a crappy scene? Can't video cards handle this?

Doom 3 is more complex and somehow has 4x the frame rate.

BTW: I checked my code and it is RIGHT; nothing in my code is contributing to the slow FPS.

What techniques would you guys use to render the same complex scene faster under GL?
 
Well, I'm storing all the scene information in both system memory and video card RAM.

That's about all I'm doing that's different from my norm.
 
How are you rendering the scene? Are you using vertex buffer objects, vertex arrays, display lists, or immediate mode (glVertex, glColor, etc.)?

Immediate mode is extremely slow, being limited by both bandwidth and CPU speed. Vertex arrays solve the CPU limitation, but bandwidth may still be an issue. Display lists are pretty damn fast, and it's not too difficult to convert to them from immediate mode. Vertex buffer objects are also very fast. A test program I wrote for rendering a random heightmap doesn't slow down to 13 fps until I'm rendering well over a million triangles on my Radeon 9700 Pro.
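
To make the difference concrete, here's roughly what the two submission paths look like. This is only a sketch: "verts" and "numVerts" are made-up names for a flat array of xyz floats, and the buffer functions are the GL 1.5 / ARB_vertex_buffer_object entry points, loaded however you normally load them.

Code:
#include <GL/gl.h>

// Made-up names for illustration: a flat array of xyz positions.
extern float *verts;
extern int    numVerts;

// Immediate mode: one function call per vertex, every single frame.
void drawImmediate()
{
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < numVerts; ++i)
        glVertex3fv(&verts[i * 3]);
    glEnd();
}

// VBO: upload the data once at load time...
GLuint vbo;
void uploadOnce()
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(float), verts, GL_STATIC_DRAW);
}

// ...then each frame the card reads the vertices from its own memory.
void drawFromVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, numVerts);
    glDisableClientState(GL_VERTEX_ARRAY);
}

The immediate mode path burns CPU on a call per vertex every frame; the VBO path pays the upload cost once and then draws with a handful of calls.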
 
OH CRAP!!

Now I'm getting 114 fps with the same settings except:
Triangles = 196,000
Vertices = 9,900

I just changed a setting. :?

It's that stupid sphere. There's a setting that tessellates it, which increases the triangle count but massively decreases the vertex count. (Also, lighting wasn't enabled in my first test; it is now. I forgot.)
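
(For reference, this is the kind of setting I mean. If the sphere were built with a GLU quadric, for example, the tessellation knob would be the slices/stacks arguments; my sphere may well come from somewhere else, so treat this as a sketch.)

Code:
#include <GL/glu.h>

void drawSpheres()
{
    GLUquadric *q = gluNewQuadric();
    gluQuadricNormals(q, GLU_SMOOTH);   // per-vertex normals for lighting
    gluSphere(q, 1.0, 16, 16);          // coarse: roughly 500 triangles
    gluSphere(q, 1.0, 256, 256);        // fine: roughly 130,000 triangles
    gluDeleteQuadric(q);
}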
 
Does R300 support 32-bit indices natively?
I mean true support, not just accepting 32-bit integers while still requiring their values to fit in 16 bits (< 65,536)?
 
CouldntResist said:
Does R300 support 32-bit indices natively?
I mean true support, not just accepting 32-bit integers while still requiring their values to fit in 16 bits (< 65,536)?

The D3D caps report a MaxVertexIndex of 16,777,215.
So I suppose 32-bit indices work, as long as they use no more than 24 significant bits. That shouldn't be much of a problem though :)
 
Re: Crappy video cards of today can't handle 178,800 triangles

K.I.L.E.R said:
9 B (1 vertex = 3 coords (x, y, z), each vertex is 3 B) per triangle * 178,800 triangles = 1.6 MB
What? :oops:

Rule 1:
Use 4 unsigned bytes for per-vertex color. Use floats for everything else.

(there may be reasons for breaking that rule ... but save that for When Everything Is Up And Running)
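
In practice the rule boils down to a vertex layout something like this. The struct and function names below are made up, just to show the idea:

Code:
#include <GL/gl.h>

// Rule 1 in struct form: floats for everything, 4 unsigned bytes for colour.
// 8 floats + 4 bytes = 36 bytes per vertex, no padding.
struct Vertex {
    float         x, y, z;      // position
    float         nx, ny, nz;   // normal
    float         u, v;         // texture coordinates
    unsigned char r, g, b, a;   // colour
};

// Pointing the fixed-function pipeline at an interleaved array of these:
void setPointers(const Vertex *verts)
{
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &verts->x);
    glNormalPointer(GL_FLOAT, sizeof(Vertex), &verts->nx);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &verts->u);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), &verts->r);
}

GL normalises the unsigned-byte colours to [0, 1] for you, and the vertex stays a compact 36 bytes.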
 