NVIDIA Tegra Architecture

Could you do some real-world tests?

E.g.
- Charge both phones in the evening, then disconnect them at night, see how much battery is left in the morning.
- From fully charged, run glbench for 10min, check battery afterwards. Also, check differences between bench scores before and after to see impact of heat etc.
Yep no problem. I will start tomorrow with my N5, then next week with the T4 Mi3 when my wife comes back from her business trip.
 
Yep no problem. I will start tomorrow with my N5, then next week with the T4 Mi3 when my wife comes back from her business trip.
Great! I tried to google some Mi3 reviews, but the ones in English are lacking in the numbers department, and are based on the QC chip.
 
Could you do some real-world tests?

E.g.
- Charge both phones in the evening, then disconnect them at night, see how much battery is left in the morning.

Both of the phones have used batteries and may not have full capacity anymore because of many charge/discharge cycles.

To get correct results the batteries should be new.
 
Both of the phones have used batteries and may not have full capacity anymore because of many charge/discharge cycles. To get correct results the batteries should be new.
True, but they were bought around the same time and have seen similar usage patterns, so first-order results should be in the same ballpark.
 
True, but they were bought around the same time and have seen similar usage patterns, so first-order results should be in the same ballpark.

Besides, xpea's results would only give us an estimate anyway given that there will be variations even between different Nexus 5s and Mi3s.
 
Ailuros said:
At least we have you bumping into every other conversation trying to play devil's advocate, without any more substance than the above that you object to. While you're busy chasing your own tail it might be more useful for a change to consider that, instead of helping keep debates on a civilized level, you're in reality pouring oil on the fire.

Ah, the ad hominem, trusty old standby.
 
Besides, xpea's results would only give us an estimate anyway given that there will be variations even between different Nexus 5s and Mi3s.

The roughly two hours (122 min) GFXBench 3.0 estimates based on its runs for the T4 aren't a crime; the iPhone 5S doesn't last any longer under it, in fact it's about the same. It's just that the S800 Xiaomi Mi3 variant lasting almost an hour longer under the same test while delivering about 60% more performance is probably too hard to swallow. If you then ask for some other kind of documentation or substantial data that would prove things are different, all you get are cheap shots below the belt or cheesy songs from antiquity which I had happily forgotten about.

The next best thing I'm expecting to hear is that perf/mW is irrelevant in this market :rolleyes:

xpea can actually run GFXBench 3.0 (as can anyone else with an OGL_ES 2.0 GPU in their smartphone) and just pick the "battery lifetime" test. In that test the Kishonti data shows that over 30 consecutive runs of T-Rex the T4/Mi3 doesn't throttle, and the test records battery percentage before and after the 30 runs and estimates how long the battery would last running that thing until it's empty. It doesn't take as long as it sounds; and yes, while battery age might be a point, I believe that even there his wife's phone will show about 2 hours.
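
If it helps, here's roughly what that extrapolation boils down to; a minimal Python sketch with invented numbers, since I obviously don't know what Kishonti actually does internally:

[code]
# Sketch of the "battery lifetime" estimate: log battery percentage before
# and after the 30 back-to-back T-Rex runs, then extrapolate linearly to a
# full 100% discharge. The example numbers below are made up.

def estimate_lifetime_minutes(pct_before, pct_after, test_minutes):
    """Linearly extrapolate total runtime from the drain of one test pass."""
    drained = pct_before - pct_after          # battery % eaten by the 30 runs
    if drained <= 0:
        raise ValueError("no drain measured - is the device charging?")
    return test_minutes * 100.0 / drained     # minutes until empty at this rate

# e.g. 30 runs taking ~20 min that drain the battery from 100% to 84%:
print(estimate_lifetime_minutes(100, 84, 20))  # -> 125.0 min, i.e. ~2 hours
[/code]

Linear extrapolation obviously overstates things if the drain rate isn't constant, which is one more reason to treat the 122 min as an indication rather than gospel.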
 
Doesn't the battery life test in GFXBench normalize for framerate? Since the S800 GPU is significantly faster than the T4 GPU in GFXBench 2.0, it's not much of a surprise that for a given framerate it would consume significantly less power. The S800 SoC also has the advantage of a more advanced and more power efficient 28nm HPM fab process.
 
Doesn't the battery life test in GFXBench normalize for framerate?

How do you mean? (Granted, I know a few things about GFXBench, but it's not like I use it on a daily basis either.) This is how I understand it so far: if you run the T-Rex onscreen test it'll give you 17+ fps for the T4 Mi3. When it comes to the "battery test - lifetime" it'll run T-Rex onscreen again 30 times in a row and log the lowest result achieved. The T4/Mi3 again achieves the same result as above, which means it doesn't throttle, at least in that test; in another case where you have for instance T-Rex onscreen at 20 fps and a "long term performance" score of 16 fps, the GPU is obviously throttling by up to 20% during that test.
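
To put numbers on that, a trivial sketch (the 20/16 fps pair is my hypothetical from above, not a measured result):

[code]
# Throttling as GFXBench exposes it: a single onscreen run vs. the lowest
# fps logged across the 30 consecutive "long term performance" runs.

def throttle_pct(single_run_fps, long_term_fps):
    """Performance lost under sustained load, in percent."""
    return (1.0 - long_term_fps / single_run_fps) * 100.0

print(throttle_pct(20.0, 16.0))  # 20.0 -> the hypothetical case above
print(throttle_pct(17.0, 17.0))  # 0.0  -> the T4/Mi3 case, no throttling
[/code]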

On a sidenote I just tried something else to see how the sw reacts: plugged it in for the battery to charge and tried to run the battery test: it just stopped since it detected that it's charging :LOL:

In any case one has to be careful with any of the displayed results too:

http://gfxbench.com/subtest_results_of_device.jsp?D=Xiaomi+MI+3W&id=559&benchmark=gfx30

Look at the two top results; I don't buy them, to be honest. There must be a loophole somewhere in the benchmark, which isn't all too surprising since it's not that easy to measure battery life either. As I said, it's mostly an indication.

Since the S800 GPU is significantly faster than the T4 GPU in GFXBench 2.0, it's not much of a surprise that for a given framerate it would consume significantly less power. The S800 SoC also has the advantage of a more advanced and more power efficient 28nm HPM fab process.
Beyond doubt yes; SoC manufacturers don't have the same development/release cadence.

However, the process in such a case just adds to the efficiency a newer architecture already has. If I now jump to GK20A/K1, which is also manufactured on TSMC 28HPM, I can see the following so far (always where I have fillrate results, to get a better picture):

Lenovo K1, GPU frequency estimated at ~290MHz

~110 GFLOPs FP32 = 11.5 fps offscreen Manhattan

Adreno330@580MHz

~148 GFLOPs FP32 = 11.9 fps offscreen Manhattan

While of course there's no indication about power consumption yet for the former, the data so far indicates a better perf/GFLOP ratio for GK20A if you compare those two.
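
A quick back-of-the-envelope with those figures:

[code]
# fps per theoretical FP32 GFLOP, from the numbers quoted above.
chips = {
    "GK20A (Lenovo K1, ~290MHz)": (11.5, 110.0),  # (Manhattan offscreen fps, GFLOPs)
    "Adreno 330 @ 580MHz":        (11.9, 148.0),
}
for name, (fps, gflops) in chips.items():
    print(f"{name}: {fps / gflops:.4f} fps/GFLOP")
# ~0.105 vs. ~0.080 -> roughly 30% more fps per theoretical FLOP for GK20A
[/code]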

***edit: by the way, from what I can see NV is the first with an OGL_ES 3.1 driver; no one seems to have noticed so far.
 
A7 delivered its performance within the confines of a compact phone and battery, was widely available almost a year earlier (counting from when a mobile consumer K1 device is widely available), and was manufactured on a significantly less efficient fabrication process, so I'm not sold on nVidia's claims so far.

To its credit, K1 is doing its thing with really high-end feature set/API support.
 
A7 delivered its performance within the confines of a compact phone and battery, was widely available almost a year earlier (counting from when a mobile consumer K1 device is widely available), and was manufactured on a significantly less efficient fabrication process, so I'm not sold on nVidia's claims so far.

To its credit, K1 is doing its thing with really high-end feature set/API support.

Within the confines of a mobile phone's power envelope, Tegra K1 should have ~ 1.5x higher GPU performance than A7. Note that A7 has only marginally higher GPU performance than S800 and roughly comparable perf. per watt (in GFXBench 3.0).

Since Apple controls both hardware design and software OS design and only needs to design the SoC for use in one iPhone and one or two iPad variants, it's no surprise that they can move more quickly than most others.

Tegra K1 in Xiaomi's Mi Pad has ~ 2.3x higher GPU performance than the seven month old iPad Air, with a much more robust and forward looking graphics and compute feature set, so I'd say that it more than meets expectations for an SoC due out at or near middle of 2014.
 
Within the confines of a mobile phone's power envelope, Tegra K1 should have ~ 1.5x higher GPU performance than A7. Note that A7 has only marginally higher GPU performance than S800 and roughly comparable perf. per watt (in GFXBench 3.0).

We'll see that when we actually see a smartphone design win. Not impossible by any means, but until it actually gets announced and arrives on shelves a lot can happen in the meantime.

Since Apple controls both hardware design and software OS design and only needs to design the SoC for use in one iPhone and one or two iPad variants, it's no surprise that they can move more quickly than most others.

With A7 it's the first time they've used one and the same SoC across all devices, and before that they shipped in huge quantities on a yearly cadence for more than one generation. There's no guarantee it'll repeat itself as a trend with the coming generation yet. For this past generation, the iPad Air came out way too underpowered compared to the iPhone 5S.

Tegra K1 in Xiaomi's Mi Pad has ~ 2.3x higher GPU performance than the seven month old iPad Air, with a much more robust and forward looking graphics and compute feature set, so I'd say that it more than meets expectations for an SoC due out at or near middle of 2014.

The MiPad hasn't shipped yet, and Xiaomi hasn't projected it before June either, if all goes according to plan, and just in China as a start. Apple's next i-gear should be only a few months away, but that's not the point here either: since I myself am not fond of getting locked into Apple's weird restrictions and still consider their hw way overpriced for what they offer, I can only welcome any effort with comparable efficiency and way more reasonable prices.

The Mi Pad will be available in China starting in June as an open beta – select fans will get the option to go hands-on with pre-production units.

http://blogs.nvidia.com/blog/2014/05/15/nvidias-tegra-k1-powers-xiaomis-first-tablet/

Considering the months I'd have to wait for worldwide availability, Apple could even beat them to the punch. Either way, give me a =/>9.7" tablet with a K1 at a price as attractive as the MiPad's (albeit higher) and I'm sold.

***edit: I could be wrong but those MiPad results smell like 700-750MHz to me for the GPU.
 
A Lenovo smart TV with a K1 appeared in the Kishonti database with 54+ fps in T-Rex offscreen; its fill rate suggests a GPU frequency of ~600MHz. Considering the MiPad is at 59+ fps, I leave it up to you to figure out the likely frequency in that one. At the originally claimed 950MHz it should yield 86+ fps, assuming GK20A stays as efficient. Again, NV never ever claimed that you'll get 950 or 850MHz in an ultra-thin tablet; it's just that some wet dreams went too far. We still have a lot of data to collect and verify, but I don't see anything that suggests 850MHz in thin tablets either.

linkadink: http://gfxbench.com/device.jsp?benchmark=gfx30&os=Android&api=gl&D=Lenovo ThinkVision 28
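
In case anyone wants to check the math: this assumes T-Rex fps scales roughly linearly with GPU clock, which ignores bandwidth and everything else, so treat it as an indication only:

[code]
# Linear clock <-> fps scaling from the Lenovo reference point above.
REF_FPS, REF_MHZ = 54.0, 600.0   # Lenovo T-Rex offscreen result / estimated clock

def clock_from_fps(fps):
    return fps / REF_FPS * REF_MHZ

def fps_from_clock(mhz):
    return mhz / REF_MHZ * REF_FPS

print(clock_from_fps(59.0))    # ~656 MHz -> the implied MiPad clock
print(fps_from_clock(950.0))   # ~85.5 fps -> the "86+ fps at 950MHz" figure
[/code]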
 
A Lenovo smart TV with a K1 appeared in the Kishonti database with 54+ fps in T-Rex offscreen; its fill rate suggests a GPU frequency of ~600MHz. Considering the MiPad is at 59+ fps, I leave it up to you to figure out the likely frequency in that one. At the originally claimed 950MHz it should yield 86+ fps, assuming GK20A stays as efficient. Again, NV never ever claimed that you'll get 950 or 850MHz in an ultra-thin tablet; it's just that some wet dreams went too far. We still have a lot of data to collect and verify, but I don't see anything that suggests 850MHz in thin tablets either.

linkadink: http://gfxbench.com/device.jsp?benchmark=gfx30&os=Android&api=gl&D=Lenovo ThinkVision 28


NV never claimed 850 or 950 MHz in a thin tablet, but lots of folk on the internet claimed the Lenovo 28" television represented the pinnacle of K1 performance. Interesting to see that it's clocked higher in a thin tablet than in a television.
 
NV never claimed 850 or 950 MHz in a thin tablet, but lots of folk on the internet claimed the Lenovo 28" television represented the pinnacle of K1 performance. Interesting to see that it's clocked higher in a thin tablet than in a television.

When Tomshardware measured the Lenovo Thinkvision 28 at CES 2014, the GPU clock operating frequency was listed at ~ 800MHz:

[Image: Tom's Hardware CES 2014 photo of the ThinkVision 28's Tegra K1 clock readout (~800MHz)]


NVIDIA has also gone on the record (in an interview posted online with Tony Tamasi) that GPU clock operating frequencies in TK1 will be much higher than T4. What this means is that GFXBench measured fill rate may not be the best way to extrapolate for frequency (and in fact, if you look at the offscreen fillrate results, you can see that the numbers started at ~ 3700 Mtexels/s and are now up to ~ 4600 Mtexels/s, which is already a pretty wide range). Hopefully we will find out next month when some people start to get their hands on Mi Pad.
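
To illustrate: assuming GK20A has 8 texture units and that the synthetic fillrate test runs near theoretical peak (both of which are assumptions on my part, not confirmed figures), that fillrate spread alone maps to a >100MHz spread in implied clock:

[code]
# Clock estimate from GFXBench offscreen fillrate (Mtexels/s).
TMUS = 8  # assumed texture unit count for GK20A

def clock_from_fillrate(mtexels_per_s, efficiency=1.0):
    return mtexels_per_s / (TMUS * efficiency)  # MHz

print(clock_from_fillrate(3700))  # ~463 MHz at the low end of the quoted range
print(clock_from_fillrate(4600))  # ~575 MHz at the high end
# a ~25% spread in measured fillrate moves the estimate by >110 MHz
[/code]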
 
NV never claimed 850 or 950 MHz in a thin tablet, but lots of folk on the internet claimed the Lenovo 28" television represented the pinnacle of K1 performance. Interesting to see that it's clocked higher in a thin tablet than in a television.

[strike]I think you're mixing up the Lenovo AIO, which truly has a 28" screen, with the 62" 4K smart TV from Lenovo; at least that's what the info states:[/strike]

http://gfxbench.com/device.jsp?benc...api=gl&D=Lenovo+ThinkVision+28&testgroup=info

***edit: seems the info on that page at the Kishonti database is bollocks.
 
When Tomshardware measured the Lenovo Thinkvision 28 at CES 2014, the GPU clock operating frequency was listed at ~ 800MHz:

Just out of curiosity, where does it state anywhere that the 804MHz is the GPU frequency? Furthermore, what are the 3 "stopped" for? A quad-GPU config for which they stopped 1 out of 4? Think about what you are actually looking at.

[Image: Tom's Hardware CES 2014 photo showing the 804MHz reading with three cores marked "stopped"]


NVIDIA has also gone on the record (in an interview posted online with Tony Tamasi) that GPU clock operating frequencies in TK1 will be much higher than T4.
Only in Shield did the GPU clock reach 675MHz; all other passively cooled devices had a much lower GPU frequency than that.

What this means is that GFXBench measured fill rate may not be the best way to extrapolate for frequency (and in fact, if you look at the offscreen fillrate results, you can see that the numbers started at ~ 3700 Mtexels/s and are now up to ~ 4600 Mtexels/s, which is already a pretty wide range). Hopefully we will find out next month when some people start to get their hands on Mi Pad.
Scores in GFXBench have the idiotic tendency to fluctuate, and that goes for all test results; onscreen they're pretty steady: http://gfxbench.com/subtest_results_of_device.jsp?D=Lenovo+ThinkVision+28&id=554&benchmark=gfx30

Wait a minute: if I'm right and the MiPad GPU is actually clocked at =/>650MHz, then the design itself has even higher potential than many of you believed. Don't tell me you'd prefer the MiPad to yield 30 fps at 850 instead of 650MHz :LOL:
 