I give up.
What? Is that not what you said in my quote?
113 seconds, and process as many frames as possible in that time frame... seems spot on to me, mate.
It's not a time limit, it's the duration of an animation sequence (a warrior in shiny armour explores an ancient temple, a fight ensues). All GPUs complete the animation sequence in 113 seconds, but some render the animation more fluidly than others. The 100th frame on one GPU can be a totally different point in animation time than the 100th frame on another GPU, since the animation is time-based, not frame-based.

Well, I think I already get it... there is a 113-second time limit, and whichever GPU can complete the most frames in that window gets the higher frames score (which is then divided by the demo time in seconds to get the FPS number).
If that's what you are both saying then I got it right the first time... (except there's no fixed amount of frames, just a time limit).
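To make the time-based vs. frame-based distinction concrete, here's a minimal sketch of that kind of loop (not GLBenchmark's actual code; now_seconds() and render_scene() are placeholders I made up). The animation always spans the same 113 seconds of animation time, every GPU renders the same sequence, and the score is simply how many frames it managed to produce divided by that duration.

```c
/* Minimal sketch of a time-based benchmark loop (illustration only, not
 * GLBenchmark's code).  The animation covers a fixed 113 s of animation
 * time; a faster GPU just fits more frames into that window. */
#include <stdio.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Placeholder: draw the scene at a given point in animation time. */
static void render_scene(double anim_time) { (void)anim_time; }

int main(void)
{
    const double duration = 113.0;   /* length of the animation sequence, seconds */
    const double start = now_seconds();
    long frames = 0;
    double elapsed = 0.0;

    while (elapsed < duration) {
        /* Each frame samples the animation by elapsed time, not by frame
         * index, so frame #100 on a slow GPU lands at a later point in the
         * animation than frame #100 on a fast one. */
        render_scene(elapsed);
        frames++;
        elapsed = now_seconds() - start;
    }

    printf("frames: %ld, fps: %.2f\n", frames, frames / duration);
    return 0;
}
```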
On The Verge's Tegra 4 video, they (nvidia) use a GLBenchmark test that I haven't seen before, C24Z16 Offscreen ETC1. How does that compare with the regular C24Z16 Offscreen in terms of performance?

ETC is a texture compression format; different vendors support different TCs. Probably nvidia is monitoring performance with different TC algorithms to be sure that they aren't losing anywhere, so nothing special here.
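For context on what the ETC1 variant actually changes, here's a minimal sketch (my own illustration, not GLBenchmark code) of uploading an ETC1-compressed texture under GLES 2.0, assuming the GL_OES_compressed_ETC1_RGB8_texture extension is available. The test content is the same; only the texture storage format (and thus bandwidth while sampling) differs.

```c
#include <GLES2/gl2.h>

#ifndef GL_ETC1_RGB8_OES
#define GL_ETC1_RGB8_OES 0x8D64
#endif

/* etc1_data is a placeholder for pre-compressed texel blocks. */
GLuint upload_etc1_texture(const void *etc1_data, int width, int height)
{
    /* ETC1 packs each 4x4 texel block into 8 bytes. */
    GLsizei image_size = ((width + 3) / 4) * ((height + 3) / 4) * 8;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_ETC1_RGB8_OES,
                           width, height, 0, image_size, etc1_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```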
From 1.10 onwards
http://www.youtube.com/watch?v=asQUdRKYVrQ
Given the dominance of GLBenchmark, the competitiveness of the market and the history of some participants, do we have any reason to assume that the benchmark doesn't receive special "optimizations"?
Other than naiveté?
Is the T-Rex demo so shader-heavy that the scores have been scrambled up that much between architectures?

AFAIK there have been very significant changes since I left IMG, so it's likely different and doesn't have a single huge bottleneck anymore, but in an early version I tried disabling the foliage (many alpha-tested layers with an ALU-heavy pixel shader) by modifying the vertex shader to output a dummy position. The overall performance of the entire benchmark increased by about 3x on all handheld architectures.
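For the curious, here's roughly what that dummy-position trick could look like (my reconstruction, assuming a GLES 2.0-style GLSL vertex shader; not the actual modification): emit a position outside the clip volume so every foliage triangle is culled before rasterization and its expensive alpha-tested, ALU-heavy fragment shader never runs.

```c
/* Hypothetical replacement vertex shader for the foliage, as a C string. */
static const char *foliage_dummy_vs =
    "attribute vec4 a_position;                               \n"
    "void main() {                                            \n"
    "    /* |x| > w puts the vertex outside the clip volume,  \n"
    "       so the triangle is discarded before the fragment  \n"
    "       shader ever runs; a_position is ignored. */       \n"
    "    gl_Position = vec4(2.0, 2.0, 2.0, 1.0);              \n"
    "}                                                        \n";
```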
The 330 gets about a 30% higher score in 2.7 than the iPad 4:
http://glbenchmark.com/phonedetails.jsp?D=LG+D801&benchmark=glpro27
http://forum.beyond3d.com/showthread.php?t=61025&page=15
Judging from the fillrate results only (which doesn't guarantee anything, as the S800 obviously has more bandwidth than a S600 too), the 330 might be clocked at ~480MHz and not at the 450MHz the Qualcomm roadmap indicated.
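A back-of-the-envelope sketch of the kind of scaling presumably behind that estimate (my own illustration; the fillrate figures below are made-up placeholders, not GLBenchmark results): if fillrate scales roughly linearly with GPU clock, the measured/expected ratio gives the clock estimate.

```c
#include <stdio.h>

int main(void)
{
    double reference_clock = 450.0;  /* MHz, clock from the Qualcomm roadmap      */
    double expected_fill   = 100.0;  /* hypothetical fillrate expected at 450 MHz */
    double measured_fill   = 106.7;  /* hypothetical measured fillrate, ~7% higher */

    /* estimated_clock = reference_clock * measured / expected */
    double estimated_clock = reference_clock * measured_fill / expected_fill;
    printf("estimated clock: %.0f MHz\n", estimated_clock);  /* ~480 MHz */
    return 0;
}
```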
Hmm, a Snapdragon 800 clocked at only 1.7 GHz, smells fishy. Either the HPM process is not yielding the results they expected, or this is the Nexus 5 prototype which would still be a good boost over the N4, and not cane the battery.