Kishonti GFXbench

It's not the Nexus; the build name suggests it's the LG Optimus G2 for T-Mobile.

The Nexus 4 had the mako/occam codenames; this fits in with LG's product line rather than the Nexus series.

And other than fansites trolling for hits, it has never really been confirmed that LG has the contract this year

Whether or not LG got the contract, the specific S800 variant would make a fine candidate for a Nexus 5.
 
Impressive result, relatively speaking, but still completely unplayable at these framerates. Using the 2.5 benchmark, which is at least on the verge of being playable in terms of framerates, the SGX 554MP4 in the iPad 4 appears to be about 20% faster in GLBenchmark 2.5 Egypt HD Offscreen (1080p) than the Adreno 330 used in this LG device. (Note that the LG device tested here appears to have a 1080p screen, so its 2.5 Egypt HD Onscreen score should be approximately equal to its Offscreen (1080p) score.)
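As an aside, the arithmetic behind that comparison can be sketched like this (a minimal illustration with made-up frame counts, not the real database entries):

```python
# Illustrative sketch of the comparison above. The frame counts are
# made-up placeholders, NOT the actual GLBenchmark database scores.

def percent_faster(score_a, score_b):
    """How much faster score_a is than score_b, in percent."""
    return (score_a / score_b - 1.0) * 100.0

sgx554mp4_offscreen_1080p = 6000  # hypothetical iPad 4 frame count
adreno330_offscreen_1080p = 5000  # hypothetical LG device frame count

# With a native 1080p panel, the onscreen render target matches the
# offscreen one, so (vsync aside) the two scores should land close:
adreno330_onscreen = adreno330_offscreen_1080p

print(percent_faster(sgx554mp4_offscreen_1080p, adreno330_offscreen_1080p))  # 20.0
```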

As I said in the past, it's the purpose of a synthetic benchmark that aims to somewhat foresee future demands to be this stressful. The part where I'm having second thoughts is what other folks pointed out wherever else I linked to the Adreno 330 result; one ever-repeating question that obviously no one can answer yet: "does it throttle?" I'm looking forward to Anandtech's analysis of the S800 in a final shipping device, and it would also be nice if he'd revisit the Nexus 4/LG issue to see whether it has been rectified in the meantime. It's a shadow Qualcomm needs to get rid of as soon as possible; otherwise people will raise their eyebrows for quite some time to come when they see GLB results from its SoCs.

All in all, though, it's nothing new that Adrenos excel as shader complexity rises. As a sidenote, I could be wrong, but judging from the iPhone 5 score I'd expect the Exynos 5410/Galaxy S4 score to be quite disappointing, somewhere in the <600 frames league.
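The "does it throttle?" question boils down to a simple procedure: run the same benchmark back to back and watch for score decay as the SoC heats up. A rough sketch (the fake device and its 5%-per-run decay are invented stand-ins for a real benchmark harness):

```python
# Sketch of a throttling check: repeat the benchmark and see whether
# scores decay across runs. make_fake_device() simulates a device that
# loses 5% of its score per consecutive run due to thermal throttling;
# a real harness would launch the actual benchmark instead.

def make_fake_device(initial_score=4000.0, decay=0.95):
    state = {"score": initial_score}
    def run_benchmark():
        score = state["score"]
        state["score"] *= decay  # simulate heat soak between runs
        return score
    return run_benchmark

def throttles(run_benchmark, runs=5, tolerance=0.03):
    scores = [run_benchmark() for _ in range(runs)]
    # Throttling if the last run is noticeably slower than the first.
    return scores[-1] < scores[0] * (1.0 - tolerance)

print(throttles(make_fake_device()))  # True for the decaying fake device
```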
 
Whether or not LG got the contract, the specific S800 variant would make a fine candidate for a Nexus 5.

Looking at the timeframe for a Nexus 5 launch, it would make sense to use the SD800. Although I'm curious about the BOM for the Nexus 4; considering Qualcomm replaced the APQ8064 in a fairly short amount of time, it makes me wonder if it was sold cheap.
 
Looking at the timeframe for a Nexus 5 launch, it would make sense to use the SD800. Although I'm curious about the BOM for the Nexus 4; considering Qualcomm replaced the APQ8064 in a fairly short amount of time, it makes me wonder if it was sold cheap.

IIRC, the Snapdragon 800 can be ordered with an integrated 3G/LTE modem on die. As the modem is virtually the same price as the SoC in a traditional two-chip solution, it will be interesting to see whether this reduces total BOM, and what its effect on power draw is.
 
Impressive result relatively speaking, but still completely unplayable at these framerates.


Game developers tune their code to try and stay up at 60 FPS.

Benchmark developers improve longevity (for onscreen benchmarks) by doing what they can to avoid 60 FPS.

Once a benchmark is pegged at 60 FPS thanks to hardware evolution, people stop using it.

Writing offscreen benchmarks, which are not limited by refresh rate, is another way out of this, but people seem to like having onscreen presentation so they can see it for themselves. Then you have to confront the vsync issue, and Android has vsync always on.
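The vsync point can be sketched numerically. This is a simplification (real vsync quantizes frame times to refresh divisors like 60/30/20 FPS rather than a smooth min), but it shows why onscreen scores saturate while offscreen scores keep scaling:

```python
# Why onscreen scores saturate: with vsync on (as on Android), a frame
# cannot be presented faster than the display refresh, so measured FPS
# clamps at 60 no matter how fast the GPU gets. Offscreen rendering
# skips presentation entirely, so it keeps scaling with the hardware.
# Simplified model with illustrative numbers.

REFRESH_HZ = 60.0

def onscreen_fps(gpu_fps):
    # Fast GPUs just wait for the next flip; FPS caps at the refresh rate.
    return min(gpu_fps, REFRESH_HZ)

def offscreen_fps(gpu_fps):
    return gpu_fps  # no refresh-rate limit

for gpu in (30.0, 60.0, 120.0, 240.0):
    print(gpu, onscreen_fps(gpu), offscreen_fps(gpu))
```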
 
By the way, the S800/Adreno 330 entry has been removed for unknown reasons from the Kishonti GLB2.7 database. Since the HTC One is already in a dangerous neighborhood compared to the iPad 4, I'd expect the S600/Galaxy S4 score to either come damn close to the iPad 4 score or even de-throne it again.
 
http://www.anandtech.com/show/6872/...ndows-tablets-compared-using-gldxbenchmark-27

Anandtech has done a bunch of GL/DXBenchmark 2.7 tests. They confirm that T-Rex HD is shader heavy, with a 165% increase in shader complexity compared to Egypt HD, whereas geometry, depth, and memory bandwidth have only gone up ~50%. The HD4000 ends up being 2-3x faster than the iPad 4's SGX554MP4 in Egypt and T-Rex, while the nVidia GT 640M LE (which might give a hint at how a Kepler Tegra 5 would perform) is 20-30% faster than the HD4000.
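Under a simple slowest-stage (bottleneck) model, those workload increases translate into very different expected slowdowns depending on what a given GPU is bound by; a back-of-envelope sketch using the percentages from the article:

```python
# Back-of-envelope: T-Rex HD raises shader work ~165% but geometry,
# depth and bandwidth only ~50% over Egypt HD. In a crude
# slowest-stage model, a shader-bound GPU's frame time grows by the
# shader factor (2.65x), a bandwidth-bound GPU's only by ~1.5x.
# Illustrative only; real GPUs shift bottlenecks mid-frame.

workload_increase = {
    "shader": 2.65,     # +165%
    "geometry": 1.5,    # +50%
    "bandwidth": 1.5,   # +50%
}

def frame_time_scaling(bound_by):
    """Slowest-stage model: frame time scales with the bottleneck stage."""
    return workload_increase[bound_by]

print(frame_time_scaling("shader"))     # 2.65
print(frame_time_scaling("bandwidth"))  # 1.5
```

This is consistent with the observation that Adrenos pull ahead as shader complexity rises: an architecture with spare ALU throughput loses less frame time to the shader-heavy test.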
 
From that Anandtech link:

Both camps simply chose different optimization points on the power/performance curve, and both are presently working towards building what they don't have. The real debate isn't whether or not each side is capable of being faster or lower power, but which side will get there first, reliably and with a good business model.
.... while the nVidia GT 640M LE (which might give a hint at how a Kepler Tegra 5 would perform) is 20-30% faster than the HD4000.

With what kind of TDP? Besides, the GT 640M LE in the Razer Edge has 32 TMUs and 16 ROPs clocked at 500MHz (570MHz turbo), with a variable TDP for the GPU alone (depending on config) of up to 20W from what I recall; dream on if you expect to get as many TMUs/ROPs in the next-generation Tegra SoC. So while you're at it, reducing texturing, z/stencil, z-kill and whatnot efficiency, you might also want to consider how generous NV would be in terms of a raster/trisetup unit. Let's cut back a healthy portion of geometry throughput too, since 2 rasters/trisetups sounds highly unlikely for such a design, and stick with the possibility of meeting the ALU throughput alone.

Wouldn't NV want the marketing TDP of Tegra5 to be at say 5W tops?
 
With what kind of TDP?
The GT 640M LE is 20W compared to 17W for the whole Ivy Bridge so it doesn't look that efficient when it only achieves a 20-30% performance difference. It might be something DXBenchmark specific though. And as you say 20W is quite a ways from being Tegra compatible of course, so there's going to be differences, although the GT 640M LE may be what's in Kayla.
 
Looking at the timeframe for a Nexus 5 launch, it would make sense to use the SD800. Although I'm curious about the BOM for the Nexus 4; considering Qualcomm replaced the APQ8064 in a fairly short amount of time, it makes me wonder if it was sold cheap.

Thing is, it actually wasn't that short a time when you compare how quickly the S4 was replaced by the S4 Pro (April to October). And the Snapdragon 600 shipping in March will probably be replaced in September by the 800 (another six months). This cadence is pretty unprecedented, especially considering it's all on 28nm, though even Samsung is doing a release (Galaxy S) and clock uptweak (Galaxy Note) twice a year too. Even nVidia had multiple Tegra 3 SKUs.

Makes me wonder how sustainable this is.
 
It's good to have this comparison, although I would have expected the 640M LE to be substantially more powerful than that.

The iPad 4 doesn't look all that bad, to be honest; can't wait to compare the next generation :)
 
Thing is, it actually wasn't that short a time when you compare how quickly the S4 was replaced by the S4 Pro (April to October). And the Snapdragon 600 shipping in March will probably be replaced in September by the 800 (another six months). This cadence is pretty unprecedented, especially considering it's all on 28nm, though even Samsung is doing a release (Galaxy S) and clock uptweak (Galaxy Note) twice a year too. Even nVidia had multiple Tegra 3 SKUs.

Makes me wonder how sustainable this is.

We are talking about two different things here. The S600 is not going to be replaced by the S800; it will continue to exist as the mainstream chip, whereas the S800 will be the high-end chip.

The APQ8064, however, has been completely replaced by the APQ8064T.
 
http://www.anandtech.com/show/6872/...ndows-tablets-compared-using-gldxbenchmark-27

Anandtech has done a bunch of GL/DXBenchmark 2.7 tests. They confirm that T-Rex HD is shader heavy, with a 165% increase in shader complexity compared to Egypt HD, whereas geometry, depth, and memory bandwidth have only gone up ~50%. The HD4000 ends up being 2-3x faster than the iPad 4's SGX554MP4 in Egypt and T-Rex, while the nVidia GT 640M LE (which might give a hint at how a Kepler Tegra 5 would perform) is 20-30% faster than the HD4000.

Apparently Anand had erroneous data on the GT 640M LE-equipped Razer Edge. He has updated the article with corrected data, and now GT 640M LE is ~ 100% faster than HD 4000:

http://images.anandtech.com/graphs/graph6872/53938.png

http://images.anandtech.com/graphs/graph6872/53944.png

The differences may be even more pronounced with games (depending on the game and the detail settings).

On a side note, the Fill Rate data on the GT 640M LE and HD 4000 still looks strange. How is it possible that the GT 640M LE and HD 4000 gain ~50% and ~100% texel fillrate, respectively, when moving from Onscreen to Offscreen (1080p)?
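One speculative explanation: if the onscreen fill test is frame-rate capped (vsync or per-frame overhead), the measured texels/s understates the hardware, and the offscreen run reveals the headroom. A sketch with invented numbers:

```python
# Speculative sketch of the fill-rate anomaly: if the onscreen fill
# test is frame-rate capped, the measured rate is
# texels_per_frame * capped_fps, which can sit far below the uncapped
# offscreen figure. All numbers are illustrative, not Anandtech's data.

def mtexels_per_s(texels_per_frame, fps):
    return texels_per_frame * fps / 1e6

texels_per_frame = 50_000_000  # hypothetical work per fill-test frame

capped = mtexels_per_s(texels_per_frame, 60.0)     # onscreen, vsync at 60
uncapped = mtexels_per_s(texels_per_frame, 120.0)  # offscreen, GPU-limited

print(capped, uncapped, uncapped / capped)  # 3000.0 6000.0 2.0
```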
 
The GT 640M LE is 20W compared to 17W for the whole Ivy Bridge so it doesn't look that efficient when it only achieves a 20-30% performance difference. It might be something DXBenchmark specific though. And as you say 20W is quite a ways from being Tegra compatible of course, so there's going to be differences, although the GT 640M LE may be what's in Kayla.

In terms of arithmetic throughput yes, but it's obviously not the entire story for GPU performance.
 
And as you say 20W is quite a ways from being Tegra compatible of course, so there's going to be differences, although the GT 640M LE may be what's in Kayla.

Technically Kayla is a not-yet-released GPU that supports OpenGL 4.3 (while GT 640M LE supports OpenGL 4.1). So Kayla is probably a modified lower power version of the newly announced GT 735M.
 
Technically Kayla is a not-yet-released GPU that supports OpenGL 4.3 (while GT 640M LE supports OpenGL 4.1). So Kayla is probably a modified lower power version of the newly announced GT 735M.

Probably, yes; however, either way it's hard to believe it'll be a 1:1 performance mirror of the future Logan SoC's GPU block in all aspects. 32 TMUs would be insane in terms of die area, and an 8x increase compared to Wayne in unit count alone, since DX11 TMUs are quite a bit more expensive than the DX9L1 TMUs in Wayne.
 
Yes, the Tegra 5 "Logan" GPU will likely be significantly reworked compared to any current Kepler GPU variant (including Kayla).
 
Also somewhat annoying that the Exynos Galaxy S4 (and maybe other Android SGX devices?) fails to compile all the GLB2.7 shaders so it's impossible to benchmark.
We fixed it ;)
 