NVIDIA Tegra Architecture

Might as well have been JHH holding up the laptop saying "This puppy is Logan".

As Anand said, he's not about to make any judgements on that as he doesn't know what else is coming off the iPad's "gpu rail".
Yeah, same Anand who has no problem writing a multi-page article on Atom's great power consumption vs. the competition, with the same kind of setup provided by Intel :rolleyes:
 
Yeah, same Anand who has no problem writing a multi-page article on Atom's great power consumption vs. the competition, with the same kind of setup provided by Intel :rolleyes:

While Anand is undoubtedly on Intel's payroll, that doesn't negate his point. We don't know what else Apple has coming off the iPad's gpu rail.

Of course you're welcome to believe everything Nvidia tells you. It's not like they have a history of bullshit or anything like that is it? :rolleyes:
 
Look, this isn't rocket science. Clearly NVIDIA spent some time and did hardware analysis and application data analysis to isolate the GPU power rail in the iPad from the CPU and memory power rails. Anand and Ryan may not have been there for that explanation. Yup, this puppy is Logan, and it looks really good so far.
 
We don't know what else Apple has coming off the iPad's gpu rail.
I keep hearing this, but it makes no sense from a DVFS perspective. You don't attach something else to a DVFS-controlled voltage rail other than the block you're meant to control with it. And frankly, I don't see anything else that would move in sync with GPU voltage.
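The DVFS point above can be sketched with the usual first-order dynamic-power relation, P ≈ C·V²·f: because voltage and frequency are dropped together, power falls much faster than performance, which is why a rail is normally dedicated to the one block it scales. All numbers below are illustrative placeholders, not measurements from any real SoC.

```python
# First-order DVFS arithmetic: dynamic power scales as C * V^2 * f,
# so lowering voltage and frequency together pays off super-linearly.
# C (effective switched capacitance), V, and f values are made up.

def dynamic_power(c_eff_nf, volts, freq_mhz):
    """Dynamic power in mW, with P = C * V^2 * f (C in nF, f in MHz)."""
    return c_eff_nf * volts**2 * freq_mhz

full = dynamic_power(1.0, 1.0, 600)   # full clock at nominal voltage
slow = dynamic_power(1.0, 0.8, 300)   # half clock at reduced voltage
print(full, slow, full / slow)        # half the speed, ~3.1x less power
```

Running the GPU at a low point on its voltage/frequency curve, as the demo reportedly did, exploits exactly this quadratic-plus scaling.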
 
It may well be the case, but when both Anand and Ryan Smith basically don't believe it, that tells its own story.

I remember how good Tegra 3 and 4 "looked", and I guess this one does too.
 
It may well be the case, but when both Anand and Ryan Smith basically don't believe it, that tells its own story.

I remember how good Tegra 3 and 4 "looked", and I guess this one does too.

Remember the story Ryan and Anand did last year comparing Tegra, Snapdragon, and Atom CPU and GPU power by measuring currents on power rails?

http://www.anandtech.com/show/6529/busting-the-x86-power-myth-indepth-clover-trail-power-analysis/8

Although they never retracted this article, I think behind the scenes they took some flack from SoC vendors because it's pretty hard to isolate power as they tried to do in this article, (and as NVIDIA is trying to do in the demo). You just don't know what's hooked up to the power rail along with the CPU or GPU that you're measuring, and every SoC does things differently.
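The measurement approach described above boils down to sampling the current on a power rail, multiplying by the rail voltage, and averaging; the confound the post raises is that anything else hanging off the same rail gets counted too. Here is a minimal sketch of that arithmetic, with entirely hypothetical numbers:

```python
# Sketch of rail-based power measurement: P = V * I, averaged over
# samples taken while the workload runs. If another block shares the
# rail, its current is silently included. All numbers are made up.

rail_volts = 1.05                       # hypothetical GPU rail voltage
samples_ma = [820, 790, 760, 835, 805]  # hypothetical current samples (mA)

avg_ma = sum(samples_ma) / len(samples_ma)
avg_power_w = rail_volts * avg_ma / 1000.0
print(round(avg_power_w, 3))  # average rail power in watts
```

The arithmetic is trivial; the hard part, as the post says, is knowing what is actually wired to the rail you're probing, and that differs per SoC.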

I wouldn't pay much attention to the iPad power numbers, since they're guesswork at best, and also because NVIDIA compares a product shipping for the past year to an unreleased sample fresh from the fab and not due to be sold as a product for quite some time yet.

Still, the absolute power numbers from the Logan demo show that Kepler can fit into mobile power envelopes just fine, contrary to the expectations of many on this board - and that's the takeaway from this demo.
 
While Anand is undoubtedly on Intel's payroll, that doesn't negate his point. We don't know what else Apple has coming off the iPad's gpu rail.

Of course you're welcome to believe everything Nvidia tells you. It's not like they have a history of bullshit or anything like that is it? :rolleyes:
No, I'm not an easy believer, but between Nvidia and Intel, the latter has a very long history of marketing PR bullshit and benchmark cheats (the compiler against AMD, just to name one, and the most hilarious, the Haswell GPU being "as fast as a GT 650M" :LOL: ).
Back to Logan vs. A6X: I think the data presented is mostly accurate, but it's a marketing pitch, so they show you only what they want to show you, focusing on the best selling points. Obviously the whole story is not so rosy, but we'll only know it when it's available to independent reviewers...
 
It may well be the case, but when both Anand and Ryan Smith basically don't believe it, that tells its own story.

Stop twisting their words. Anand and Ryan at first had a hard time believing what they were seeing (ie. they didn't expect this level of GPU performance at less than 1w power consumption). Never in their wildest dreams could they imagine that Kepler.M would be so power efficient. But the data is what it is, and they are believers now.

RecessionCone said:
I wouldn't pay much attention to the iPad power numbers, since they're guesswork at best

Guesswork at best? I doubt that. It is very evident from the video that NVIDIA took pains to isolate GPU/CPU/mem power rails through hardware analysis and directed application data analysis. This is the real deal. Now, obviously future iPads will be more power efficient than the iPad 4, but who in the world would have expected 3x better GPU power efficiency for Kepler.M vs. A6X (in addition to 4-5x higher peak performance headroom too)?
 
Stop twisting their words. Anand and Ryan at first had a hard time believing what they were seeing (ie. they didn't expect this level of GPU performance at less than 1w power consumption). Never in their wildest dreams could they imagine that Kepler.M would be so power efficient. But the data is what it is, and they are believers now.

Source? All I see is the following quotes - http://www.anandtech.com/show/7169/nvidia-demonstrates-logan-soc-mobile-kepler/2

"I won't focus too much on the GPU power comparison as I don't know what else (if anything) Apple hangs off of its GPU power rail"
"If these numbers are believable"
"If NVIDIA's A6X power comparison is truly apples-to-apples"
"I showed it to Ryan Smith, our Senior GPU Editor, and even he didn't believe it."
Anand could barely be flying his flag of disbelief any higher without seriously affecting his relationship with Nvidia.
 
LOL, you are a piece of work. Rather than taking words out of context from Anand, how about quoting full sentences:

Anandtech said:
If NVIDIA's A6X power comparison is truly apples-to-apples, then it would be a huge testament to the power efficiency of NVIDIA's mobile Kepler architecture

Anandtech said:
Regardless of process tech however, Kepler's power story in ultra mobile seems great. I really didn't believe the GLBenchmark data when I first saw it. I showed it to Ryan Smith, our Senior GPU Editor, and even he didn't believe it. If NVIDIA is indeed able to get iPad 4 levels of graphics performance at less than 1W (and presumably much more performance in the 2.5 - 5W range) it looks like Kepler will do extremely well in mobile

Anandtech said:
Whatever NVIDIA's reasons for showing off Logan now, the result is something that I'm very excited about. A mobile SoC with NVIDIA's latest GPU architecture is exactly what we've been waiting for

It is pretty obvious what Anand and Ryan think about Logan, so stop playing dumb.
 
I thought this had already been discussed. I can think of a part or two of a SoC that might also be on its GPU power rail; it might be logical as the system bus is sometimes run at the same clock speed. However, it's beside the point, especially as the power consumed in that test scenario is likely to be predominantly GPU related.

The 2.6 W on average drawn by the A6X power rail at full load is a reasonable amount to draw, all things considered. Logan is able to look so much better for a number of reasons stated before, not the least of which is the proportionately lower power consumed by running at a low (by its performance standard) voltage and frequency.

nVidia hasn't fared well against PowerVR products in comparisons of comparable parts in the past. We'll have to see if that changes for this mobile Kepler versus a Rogue-based A7X.
 
nVidia hasn't fared well against PowerVR products in comparisons of comparable parts in the past. We'll have to see if that changes for this mobile Kepler versus a Rogue-based A7X.

Well, the writing is already on the wall. It is pretty obvious that past comparisons need not and should not apply when comparing to Kepler.M. In fact, it appears that Kepler.M will leapfrog past other ultra-low-power GPUs with respect to performance per watt, peak performance, and feature set.

Brilliantdeve said:
I think Nvidia's Project Logan should be compared to the Apple A8X, since Nvidia won't be able to deliver it until 2H 2014

No, all signs point to Kepler.M coming to market in 1H 2014 (in fact, there is even a reasonably good chance that products will come to market by end of Q1 2014).
 
The 600+ MHz G6400 that ST-Ericsson was targeting in its A9600 indicates that PowerVR configurations could potentially deliver around 200 GFLOPS into an upcoming smartphone solution.

If the ratio of delivered performance to FLOPs of the purported MT8135 GLBench 2.5 score is real and at all representative of Rogue in contemporary real-world workloads, a little extrapolation of that data combined with an understanding of the difference between what nVidia is demoing and what they're actually implying they'll deliver into a comparable product leads me to believe that this Kepler will have a hard time just matching PowerVR, let alone leap-frogging it.
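The extrapolation gestured at above amounts to taking a benchmark-score-per-GFLOP ratio from one known part and scaling it by another part's peak GFLOPS. A minimal sketch, where every figure is a hypothetical placeholder rather than a real MT8135 or G6400 number:

```python
# Naive performance extrapolation: assume delivered benchmark score
# scales linearly with peak GFLOPS across parts of the same family.
# All figures are hypothetical placeholders, not real benchmark data.

known_score = 30.0     # hypothetical GLBench-style score for a known part
known_gflops = 50.0    # hypothetical peak GFLOPS of that part
target_gflops = 200.0  # smartphone-class config cited in the post above

score_per_gflop = known_score / known_gflops
projected = score_per_gflop * target_gflops
print(projected)  # linear projection; real scaling is usually worse
```

Linear scaling is an optimistic upper bound, since bandwidth and thermal limits typically erode delivered performance long before peak FLOPS quadruple.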
 
this Kepler will have a hard time just matching PowerVR, let alone leap-frogging it.
It's too early to judge performance, but one thing is sure: this time, and for the very first time in Tegra history, Nvidia will be ahead of the competition on feature set, with OpenGL 4.3 / DX11 / CUDA 3.5 support.
I must say: finally! Even if it took too long from the market's GPU leader.
 
Well, the writing is already on the wall. It is pretty obvious that past comparisons need not and should not apply when comparing to Kepler.M. In fact, it appears that Kepler.M will leapfrog past other ultra-low-power GPUs with respect to performance per watt, peak performance, and feature set.

Just like all the Tegras before it. :LOL:

No, all signs point to Kepler.M coming to market in 1H 2014 (in fact, there is even a reasonably good chance that products will come to market by end of Q1 2014).

I don't recall Nvidia saying Shield would be on a 6-month refresh cycle?
 