NVIDIA Tegra Architecture

So does anyone know why the T4-powered Tegra Note 7 has up to 30% higher CPU perf. than Shield?

http://images.anandtech.com/graphs/graph7508/60001.png

http://images.anandtech.com/graphs/graph7508/60003.png

I wonder if this device is using a newer revision of the A15 core...?

They both use the r2p2 version of the A15.

My Tegra Note 7 scores even higher using the latest version of Chrome Beta, so this is definitely down to browser updates. Notice in the AnandTech review that only the Kraken benchmark is prefixed with "Stock Browser"; they're most likely using Chrome for the other tests, which would have received one or two updates since the Shield review.
 
Would Google be willing to downgrade the GPU feature set of all Nexus devices launched in the last 1.5 years (or ~2 years by the time the new Nexus 10 comes out)?
Tegra 4 is OpenCL-less and only supports OpenGL ES 2.0...
Are you talking about an OpenGL ES 3.0 implementation so broken, buggy, and useless that no real app can use it? With zero-level support, i.e. devs posting bug reports on the official QC forum without any reply since July?
You know what? I think T4 will be a HUUUUUGE step up over this Adreno 330 check-mark GPU...

edit: that being said, I think TK1 will be a much better SoC for a 2014 Nexus 10
 
Are you talking about an OpenGL ES 3.0 implementation so broken, buggy, and useless that no real app can use it? With zero-level support, i.e. devs posting bug reports on the official QC forum without any reply since July?
You know what? I think T4 will be a HUUUUUGE step up over this Adreno 330 check-mark GPU...

What's embarrassing is that the Nexus 10 doesn't even contain an Adreno 330 but a Mali T604 from ARM. :devilish:
 
What's embarrassing is that the Nexus 10 doesn't even contain an Adreno 330 but a Mali T604 from ARM. :devilish:

Yap.

I wonder, is the Mali T604 support for OpenGL ES 3.0 also "broken, buggy and useless"?


Nonetheless, Nexus devices are often seen as the most future-proof Android phones/tablets available.
Even if KitKat uses no OpenCL or OpenGL ES 3.0, there's a chance that one of them might be used/required for some features in future Android releases (image editing on the GPU, for example).

It would be pretty embarrassing to present a brand new feature for Android 5.x that worked on the Nexus 4, 5, 7 (2013), and 10 (2013) but not on the Nexus 10 (2014).
"Sorry guys, this only works with last year's model."
 
What's embarrassing is that the Nexus 10 doesn't even contain an Adreno 330 but a Mali T604 from ARM. :devilish:
Hmmm, it's not embarrassing at all. I was talking about the S800 because apparently the choice for this 2014 Nexus 10 is between the S800 and T4...
 
Yap.

I wonder, is the Mali T604 support for OpenGL ES 3.0 also "broken, buggy and useless"?

ES 3.0 support was always a question mark for the 604 in the beginning, since it was never clear in ARM's own material. I don't have a clear picture of it even today; however, since the hw is certainly capable of it, it would be absurd if they didn't have ES 3.0 support by now.

Nonetheless, Nexus devices are often seen as the most future-proof Android phones/tablets available.
Even if KitKat uses no OpenCL or OpenGL ES 3.0, there's a chance that one of them might be used/required for some features in future Android releases (image editing on the GPU, for example).

It would be pretty embarrassing to present a brand new feature for Android 5.x that worked on the Nexus 4, 5, 7 (2013), and 10 (2013) but not on the Nexus 10 (2014).
"Sorry guys, this only works with last year's model."

It's my understanding that Google picks the best hw by their own standards that is ready at a given point in time. I've also read somewhere that we might see Intel in one of the Google Nexus reference devices, but I can't tell whether there's anything behind it or not. And no, I don't see why Intel shouldn't win a spot there for a change, if Google deems the hw good enough.

No idea who it's going to be, but you have to remember that precisely because Google negotiates with a number of contenders for any given reference device, you'll see many if not all of them appear as the "winner" for it. News blurbs that refer to IHV A, B, or C might not necessarily be wrong if there have been negotiations and someone thinks it's a done deal.
 
Hmmm, it's not embarrassing at all. I was talking about the S800 because apparently the choice for this 2014 Nexus 10 is between the S800 and T4...

The debate was about the Nexus 10, though, not the Nexus 7. Completely different devices and completely different price points.

Apart from that, I think both Google and Amazon had their own respective reasons for choosing the S800 out of a sizeable number of competing SoCs.
 
Like Exophase said, I was talking in terms of OpenGL ES 3.0 driver quality.
One interesting link:
https://dolphin-emu.org/blog/2013/09/26/dolphin-emulator-and-opengl-drivers-hall-fameshame/

Oh please spare me that link. It's been used in about EVERY forum in defense of NV's driver superiority, and it's one singled-out case of a darned emulator where they even admit that at the time they didn't even have a Tegra 4 to test ES 3.0 on.

Given that our experience with mobile drivers has been… less than good (this article will talk about that in details later), we are really curious to see how good NVIDIA drivers for the Tegra 4 SoC are. We couldn’t get hold of a device powered by Tegra 4 yet (they are expensive!), but we are really curious to see how Dolphin would perform on them, given that the bottleneck for Dolphin on Android is now mostly GPUs and their drivers.
So no update ever since, obviously, and the garage team with the funky emulator is lost in oblivion, but NV drivers are still great no matter what, whether you've tried them or not.

Would any of you mind getting a wee bit more original material for a change? I actually wonder if there's some sort of manual for defending NV, like "for excellent driver support, use this linkadink; users are dumb, they won't notice they haven't used a T4-based device."
 
Oh please spare me that link. It's been used in about EVERY forum in defense of NV's driver superiority, and it's one singled-out case of a darned emulator where they even admit that at the time they didn't even have a Tegra 4 to test ES 3.0 on.

How could they? Tegra 4 only supports OpenGL ES 2.0.
Which is why I don't see the point of mentioning Tegra 4 in that Dolphin comparison.

They said nVidia was the best, yet at the time nVidia didn't have a SoC that could run Dolphin at all...
 
How could they? Tegra 4 only supports OpenGL ES 2.0.
Which is why I don't see the point of mentioning Tegra 4 in that Dolphin comparison.

They said nVidia was the best, yet at the time nVidia didn't have a SoC that could run Dolphin at all...

In all fairness, NV could support the majority of ES 3.0 via extensions. Or, if they had had a T4 device, they might have been able to test its ES 3.0 driver, if one existed at the time. IMG apparently didn't have ES 3.0 drivers ready at the time either, yet their space was left essentially empty as an unknown. It should have been the same empty space/question mark for NV too, given the lack of real first-hand ES 3.0 experience.
 
Regarding the CPU perf. and power efficiency of the r3 Cortex-A15 in TK1, take a close look at this chart: http://images.anandtech.com/doci/7622/Screen Shot 2014-01-06 at 6.18.12 AM.png

Note that for a SPECInt score of ~950 per core (which is close to what most T4-powered devices would achieve as a maximum per core), the SPECInt perf. per watt of TK1 is at least 2x higher in comparison (due to the combination of three factors: process, implementation, and architectural improvements). This massive perf. per watt advantage is maintained at most points along the curve.

Now, most TK1-powered devices will have a SPECInt score of ~1300 per core maximum. So, based on the graph, peak power consumption per CPU core will be ~50% lower for TK1 than for T4 (even though peak performance is nearly 40% higher in comparison)!

A device like Shield v2 will have a SPECInt score of ~1400 per core maximum (i.e. at the full 2.3GHz CPU clock). Shield v2 will also have ~365 GFLOPS of GPU throughput (i.e. at the full 951MHz GPU clock).
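For what it's worth, the ~365 GFLOPS figure falls straight out of the usual peak-FLOPS arithmetic. A minimal sketch, assuming TK1's 192 CUDA cores and counting an FMA as two FLOPs per cycle (the power relation at the end just restates the slide's iso-performance claim, so treat it as illustrative only):

```python
# Back-of-the-envelope check of the ~365 GFLOPS figure above.
# Assumes TK1's 192 CUDA cores and counts an FMA as 2 FLOPs per cycle.
cuda_cores = 192
gpu_clock_ghz = 0.951  # the full 951 MHz GPU clock mentioned above

peak_gflops = cuda_cores * 2 * gpu_clock_ghz
print(f"{peak_gflops:.0f} GFLOPS")  # -> 365 GFLOPS

# The iso-performance power claim works the same way: power = perf / (perf/W),
# so "2x the SPECInt/W at the same ~950 score" implies half the per-core power.
def implied_power_watts(perf, perf_per_watt):
    return perf / perf_per_watt
```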
 
http://www.tomshardware.com/news/lenovo-thinkvision-28-nvidia-tegra-k1-android,25733.html

Some bench tests on the K1.

About 75% faster than the A7 in GFXBench 2.7.

About 30% faster than the A7 in the iPhone in the graphics test of 3DMark.
I think Anand has said that the graphics subsection of 3DMark has some CPU element to it. Taking that into account, the 3DMark results are disappointing.

The Tom's piece says it was running 15% below max clock.

Now, I wonder how much power that SoC draws running those benches.
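If the K1 in that prototype really was running 15% below its max GPU clock, a naive linear scaling gives a rough upper bound on the full-clock score. A quick sketch; the measured score below is a made-up placeholder, not the actual 3DMark number, and real scaling is rarely perfectly linear:

```python
# Naive full-clock projection from a score measured at a reduced clock.
# Assumes perfectly linear scaling with GPU clock, which is optimistic.
measured_score = 30000  # hypothetical placeholder, not the real result
clock_deficit = 0.15    # "running 15% below max clock" per the Tom's piece

estimated_full_clock_score = measured_score / (1 - clock_deficit)
print(f"~{estimated_full_clock_score:.0f} at full clock")  # ~17.6% above measured
```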
 
Regarding the CPU perf. and power efficiency of the r3 Cortex-A15 in TK1, take a close look at this chart: http://images.anandtech.com/doci/7622/Screen Shot 2014-01-06 at 6.18.12 AM.png

Note that for a SPECInt score of ~950 per core (which is close to what most T4-powered devices would achieve as a maximum per core), the SPECInt perf. per watt of TK1 is at least 2x higher in comparison (due to the combination of three factors: process, implementation, and architectural improvements). This massive perf. per watt advantage is maintained at most points along the curve.

Now, most TK1-powered devices will have a SPECInt score of ~1300 per core maximum. So, based on the graph, peak power consumption per CPU core will be ~50% lower for TK1 than for T4 (even though peak performance is nearly 40% higher in comparison)!

Given that's an NVIDIA slide, I'll reserve judgment until I see independent 3rd-party results. What was that again in their K1 whitepaper about a +/-20% margin of error in some numbers? I wonder if that's a repeating trend or just a "singled-out" case.

A device like Shield v2 will have a SPECInt score of ~1400 per core maximum (i.e. at the full 2.3GHz CPU clock). Shield v2 will also have ~365 GFLOPS of GPU throughput (i.e. at the full 951MHz GPU clock).

...and excellent active cooling? :LOL:
 