I get 130 fps on a Radeon 9700 PRO at the default settings.
I'm actually a little disappointed in that figure. I only scanned Humus's code, but I suspect there are some optimizations to be had using buffered primitives/indices, range extensions, and stripification. I think the demo only uses immediate-mode glVertex* calls for drawing. Huddy hammered us over and over at the last Mojo saying that sending 1 triangle at a time to the Radeon 9700 will destroy performance and leave the card achieving only a fraction of its true potential.
I'm not criticizing Humus though; it's clearly meant as a demonstration, not an optimized renderer. The code is written to be easy to read and understand. It's a very nice demo, and I'd suggest anyone learning GL programming take a look at Humus's code.
-DC
p.s. Now that I think about it, it's not so disappointing. The demo renders to 12 shadow render targets (one per cube-map face for each light) plus the main render target, so multiply that FPS by 13 to get the true number of "frames" being rendered per second.