Broadcom BCM2763 or VideoCore IV

Maybe if you hold off a bit on the flamey sarcasm next time, we'll both avoid this "irrelevant content"?
:)

Still irrelevant.

I think I didn't write what I meant, hence the confusion.

Accepted. Unfortunately I can only reply to what I read, not to what is in everyone's head. And no, before you take it personally again (it's never meant that way), it can happen to anyone.

The formula used to compute the fillrate score takes the number of rendered pixels as a variable, so the score reflects the GPU's capability and doesn't vary with the device's resolution.

darkblu's answer above covers another point against that.

You can re-check that by looking at the Marvell Armada 1080p tablet and the 480*320 phone: the exact same SoC, RAM, etc., achieving the exact same fillrate score in GLBenchmark 1.1.
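
To put the resolution point in numbers, here's a toy calculation in C (the frame rates below are made up for illustration, not actual GLBenchmark results): if the score is frames per second multiplied by pixels per frame, two devices with the same GPU but different screens land on the same figure.

/* Toy illustration (made-up frame rates, not actual GLBenchmark data):
 * if the fillrate score is frames per second times pixels per frame,
 * two devices with the same GPU but different screens end up with the
 * same number. */
#include <stdio.h>

int main(void)
{
    double tablet_fps = 10.0,  tablet_pixels = 1920.0 * 1080.0; /* 1080p   */
    double phone_fps  = 135.0, phone_pixels  = 480.0  * 320.0;  /* 480*320 */

    printf("tablet: %.1f Mpixels/s\n", tablet_fps * tablet_pixels / 1e6);
    printf("phone : %.1f Mpixels/s\n", phone_fps  * phone_pixels  / 1e6);
    return 0;
}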

Either I'm missing something or there's no mention anywhere of clock frequencies. I personally wouldn't expect even reference platforms like those to run the same SoC at the same frequencies in a 1080p tablet and a 320*480 smartphone; you can't expect similar power consumption with such large differences.
 
Wait, I must have missed something in this thread. So what was your original argument against the fillrate comparison if it was not the vsync factor? I mean, I can think of some, but they'd all be the benchmark's fault, so to speak, whereas vsync is an 'act of god'. Well, technically, even that can be worked around by clever techniques, but most benchmarks don't bother (that's speaking from principle; dunno what GLBenchmark actually does).
 
It was not just the vsync factor; I'm not even convinced yet that the application measures fillrate in a way that factors in resolution. Besides that, one of my other points was that fillrate isn't unrelated to bandwidth. What if GLBenchmark 2.5 appears in the foreseeable future, measures at 1080p, and the iPad 2 incidentally shows a larger gap in fillrate measurements compared to any other device?

I don't have any precise idea what GLBenchmark does exactly; the only thing I'm assuming, without being completely sure, is that 2.1 with its warm-up test could be measuring single-texturing, which, if true, is quite strange.
 
Yes, I read your argument with Ailuros. Unfortunately the presence of vsync cannot be easily discarded even for results that are not capped by vsync. Of course, the lower the result, the less it will be affected by vsync, but even for something like 25fps (40ms frame duration), you can have up to 16ms of vsync waits, based on sheer bad luck of having your frames posted shortly after the previous vsync cycle. As you can see, 16/40 is not a negligible 'ballast' to speak of. Those 25fps could actually be anything between 25 (your frames were lucky to always come at a vsync) and 41fps (your frames always took a full 1/60th of a sec hit) once you turn vsync off. Even if we assumed luck to be impartial, say 50/50, you'd still be looking at (25 + 41) / 2 = 33fps of potential performance at vsync off, for the same '25@vsync-on' case - that's a 32% gain from vsync off.
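
For anyone who wants to plug in their own numbers, here is the same back-of-the-envelope arithmetic as a tiny C snippet. It uses the rounded 16 ms wait figure from above, so the results differ from the exact 1/60 s case only by rounding, and the 50/50 "luck" split is the same simplifying assumption as in the post.

/* Back-of-the-envelope: given a vsync-capped measurement, what could
 * the uncapped frame rate be? */
#include <stdio.h>

int main(void)
{
    double fps_capped = 25.0;
    double frame_ms   = 1000.0 / fps_capped;       /* 40 ms per frame       */
    double wait_ms    = 16.0;                      /* worst-case vsync wait */

    double no_wait   = 1000.0 / frame_ms;              /* never waited: no gain   */
    double full_wait = 1000.0 / (frame_ms - wait_ms);  /* always waited: max gain */
    double mid       = (no_wait + full_wait) / 2.0;    /* 50/50 luck assumption   */

    printf("vsync off: %.1f..%.1f fps, midpoint %.1f fps (~%.0f%% gain)\n",
           no_wait, full_wait, mid, 100.0 * (mid / fps_capped - 1.0));
    return 0;
}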

Of course that's only if you're not using triple buffering, which will prevent the vsync waits from stalling rendering.
 
Partially, yes, depending on the spikes in frame posting. Triple buffering is a rather limited version of ring buffering, and as such it cannot tackle wide spikes.
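
A toy swap-chain simulation makes the "wide spikes" point visible. Everything below is an assumption for illustration (the frame times, the 60 Hz refresh, the simplified flip model), not a claim about what any particular driver does.

/* Toy swap-chain simulation.  One spare back buffer (triple buffering)
 * can absorb an isolated slow frame, but a run of slow frames ("wide
 * spike") drains the spare slot and the hitches show through anyway. */
#include <stdio.h>

#define VSYNC 16.7   /* ms per refresh, i.e. 60 Hz */

/* spare = completed frames allowed to wait for a flip:
 * 1 -> double buffering, 2 -> triple buffering */
static double worst_gap(const double *render_ms, int n, int spare)
{
    double t = 0.0, vsync = VSYNC, last_shown = 0.0, worst = 0.0;
    int queued = 0, shown = 0;

    for (int i = 0; i < n; i++) {
        while (queued == spare) {          /* all buffers busy: wait for a flip */
            if (t < vsync) t = vsync;
            if (vsync - last_shown > worst) worst = vsync - last_shown;
            last_shown = vsync; queued--; shown++;
            vsync += VSYNC;
        }
        t += render_ms[i];                 /* render the next frame */
        while (vsync <= t) {               /* flips that happened meanwhile */
            if (queued > 0) {
                if (vsync - last_shown > worst) worst = vsync - last_shown;
                last_shown = vsync; queued--; shown++;
            }
            vsync += VSYNC;
        }
        queued++;                          /* frame i now waits for its flip */
    }
    while (shown < n) {                    /* drain whatever is still queued */
        if (vsync - last_shown > worst) worst = vsync - last_shown;
        last_shown = vsync; queued--; shown++;
        vsync += VSYNC;
    }
    return worst;
}

int main(void)
{
    const double narrow[] = { 10, 10, 10, 30, 10, 10, 10, 10 }; /* one slow frame  */
    const double wide[]   = { 10, 10, 30, 30, 30, 30, 10, 10 }; /* sustained spike */

    printf("narrow spike: worst gap %.1f ms (double), %.1f ms (triple)\n",
           worst_gap(narrow, 8, 1), worst_gap(narrow, 8, 2));
    printf("wide spike  : worst gap %.1f ms (double), %.1f ms (triple)\n",
           worst_gap(wide, 8, 1), worst_gap(wide, 8, 2));
    return 0;
}

With these made-up numbers the spare buffer hides the isolated 30 ms frame completely (the worst frame-to-frame gap stays at one refresh), while during the sustained run of 30 ms frames both schemes hitch to a two-refresh gap.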
 
Experienced programmers can:

Rebuild the BCM21553 Android 4.0 graphics stack from source
Develop fully open drivers for other VideoCore devices, including the Raspberry Pi’s BCM2835 and the BCM21654 (a low-cost 3G integrated baseband for emerging markets)
Gain insight into the internal operation of VideoCore for performance tuning purposes
Write general-purpose code leveraging the GPU compute capability on VideoCore devices


The VideoCore IV can do GPGPU?
This may sound like a very silly question from a layman, but I thought GPUs without dedicated general-purpose hardware (i.e. OpenCL compliance) couldn't return values from their allocated video memory in a form readable by the CPU. Then again, this is UMA, so maybe that solves everything.

I'd like to know because I'm using a Raspberry Pi for a research project, and the GPU could be really useful for some DSP work I'll have to do (that ARM11 won't work any miracles).
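
For what it's worth, even without OpenCL a GLES2-class GPU can be pressed into GPGPU-style service the classic way: run the computation in a fragment shader over a full-screen quad and pull the results back with glReadPixels. Below is a minimal sketch of that pattern; the shader is just a placeholder rather than real DSP code, error checking is omitted, and on the Pi's legacy driver you may additionally need bcm_host_init() from the firmware libraries before EGL comes up.

/* Hypothetical sketch: fragment-shader "GPGPU" plus glReadPixels
 * readback on a GLES2-class part.  Generic EGL pbuffer setup; error
 * checking and shader logs omitted for brevity. */
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <stdio.h>

#define W 256
#define H 256

static const char *vs_src =
    "attribute vec2 pos;\n"
    "varying vec2 uv;\n"
    "void main() { uv = pos * 0.5 + 0.5; gl_Position = vec4(pos, 0.0, 1.0); }\n";

/* The 'kernel': each fragment computes one output value. */
static const char *fs_src =
    "precision mediump float;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    gl_FragColor = vec4(sin(uv.x * 20.0) * 0.5 + 0.5, uv.y, 0.0, 1.0);\n"
    "}\n";

static GLuint compile(GLenum type, const char *src)
{
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);
    return s;
}

int main(void)
{
    /* Off-screen W x H render target via an EGL pbuffer. */
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);
    const EGLint cfg_attrs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                                 EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                                 EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8,
                                 EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8, EGL_NONE };
    EGLConfig cfg; EGLint n;
    eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);
    const EGLint pb_attrs[]  = { EGL_WIDTH, W, EGL_HEIGHT, H, EGL_NONE };
    const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pb_attrs);
    EGLContext ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* Build the program and draw one full-screen quad: every pixel of
     * the pbuffer runs the fragment 'kernel' once. */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile(GL_VERTEX_SHADER, vs_src));
    glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fs_src));
    glBindAttribLocation(prog, 0, "pos");
    glLinkProgram(prog);
    glUseProgram(prog);

    static const GLfloat quad[] = { -1,-1,  1,-1,  -1,1,  1,1 };
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, quad);
    glViewport(0, 0, W, H);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    /* Pull the results back to the CPU (this waits for the GPU to finish). */
    static unsigned char out[W * H * 4];
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, out);
    printf("first output texel: %d %d %d %d\n", out[0], out[1], out[2], out[3]);

    eglTerminate(dpy);
    return 0;
}

The readback is the painful part of this trick: glReadPixels in ES 2.0 is synchronous, so it fits batch-style DSP passes better than anything latency-critical; UMA should at least take the worst of the copy cost out of it.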
 