<-- This is an image (1280x720, 200 samples per pixel) that I rendered in OctaneRender Cloud for 1.79 Render Tokens (=$0.45). With no further knowledge of the scene, is there a way for me to unequivocally determine how many rays were fired per second?
The reason I chose this resolution and spp is that I wanted to see what could've been rendered, at 30 fps, on the now-scrapped PowerVR 120 W ray tracing ASIC. That ASIC would've been capable of ~6 billion rays per second, or 720p30 @ 200 spp (https://render.otoy.com/forum/viewtopic.php?f=98&t=58757#p301588). So far this seems like a fairly straightforward relationship: if you divide 6*10^9 rays per second by (1280 * 720 * 30 * 200) ≈ 5.5 billion samples per second, you get roughly 1 ray per sample.
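To make that arithmetic explicit, here's a quick sketch of my numbers (assuming exactly one ray per sample, which may itself be the weak link):

```python
# PowerVR ASIC ray budget vs. the sample rate of 720p30 @ 200 spp
asic_rays_per_s = 6e9                   # ~6 billion rays/s claimed for the 120 W ASIC
samples_per_s = 1280 * 720 * 30 * 200   # pixels * fps * spp ≈ 5.53e9 samples/s

print(asic_rays_per_s / samples_per_s)  # ≈ 1.08 rays per sample
```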
However, the next part is where things get tricky. We know that dual GTX 1080s can trace 200-250 Mrays/s (https://www.reddit.com/r/Vive/comments/5nghyn/otoy_is_bringing_lightfield_baking_and_more_to/dchvkag). And, if the one-ray-per-sample assumption holds, this scene requires (1280 * 720 * 200) ≈ 184 million rays to complete, so it should take less than one second on those dual 1080s, right?
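Spelled out, the naive estimate would look like this (again assuming one ray per sample and the quoted Mrays/s figures):

```python
# Naive render-time estimate for one 720p frame at 200 spp on dual GTX 1080s
total_rays = 1280 * 720 * 200        # ≈ 184 million rays (1 ray per sample)
for rays_per_s in (200e6, 250e6):    # quoted 200-250 Mrays/s range
    print(total_rays / rays_per_s)   # ≈ 0.92 s and ≈ 0.74 s
```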
Now for the conflicting piece of data: because the scene cost 1.79 RNDR (=$0.45) and took 124 seconds to render on ORC, we can calculate the total work in OctaneBench terms. 1 RNDR = 256 OB * 256 s, so the total work done was 256 * 256 * 1.79 ≈ 117,000 OB·s. Dual 1080s benchmark at ~270 OB (https://render.otoy.com/octanebench/results.php?v=3.06.2&sort_by=avg&filter=&singleGPU=1).
In other words, it would take dual 1080s 117,000 OB·s / 270 OB ≈ 433 s to render the same scene that, according to the earlier estimate, should've taken only a fraction of a second. That's close to a three-order-of-magnitude discrepancy!
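Here's the ORC-side calculation laid out the same way (assuming the 1 RNDR = 256 OB * 256 s definition and the ~270 OB OctaneBench score for dual 1080s; the midpoint Mrays/s figure is just my choice for comparison):

```python
# Work implied by the ORC price, in OctaneBench-seconds (OB*s)
rndr_cost = 1.79
work_ob_s = 256 * 256 * rndr_cost          # ≈ 117,000 OB*s

dual_1080_score = 270                      # approximate OctaneBench score (OB)
time_s = work_ob_s / dual_1080_score       # ≈ 434 s

naive_time_s = (1280 * 720 * 200) / 225e6  # ≈ 0.82 s at ~225 Mrays/s
print(time_s, time_s / naive_time_s)       # ≈ 434 s, roughly a 530x discrepancy
```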
What am I missing?