NVIDIA Tegra Architecture

Discussion in 'Mobile Graphics Architectures and IP' started by french toast, Jan 17, 2012.

  1. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
  2. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    Might as well have been JHH holding up the laptop saying "This puppy is Logan".

    As Anand said, he's not about to make any judgements on that as he doesn't know what else is coming off the iPad's "gpu rail".
     
  3. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    376
    Likes Received:
    330
Yeah, the same Anand who had no problem writing a multi-page article on Atom's great power consumption versus the competition, with the same kind of setup provided by Intel :roll:
     
  4. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    While Anand is undoubtedly on Intel's payroll, that doesn't negate his point. We don't know what else Apple has coming off the iPad's gpu rail.

    Of course you're welcome to believe everything Nvidia tells you. It's not like they have a history of bullshit or anything like that is it? :roll:
     
  5. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
Look, this isn't rocket science. Clearly NVIDIA spent some time and did hardware analysis and application data analysis to isolate the GPU power rail in the iPad from the CPU and memory power rails. Anand and Ryan may not have been there for that explanation. Yup, this puppy is Logan, and it looks really good so far.
     
  6. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    995
    Likes Received:
    162
    Location:
    Luxembourg
    I keep hearing this, but it makes no sense from a DVFS perspective. You don't attach something else to a differential voltage rail other than what you're meant to control with it. And frankly I don't see anything that would be in synchronicity with GPU voltage.
     
  7. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
It may well be the case; however, when both Anand and Ryan Smith basically don't believe it, that tells its own story.

    I remember how good Tegra 3 and 4 "looked" and I guess they do also.
     
  8. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    499
    Likes Received:
    177
    Remember the story Ryan and Anand did last year comparing Tegra, Snapdragon, and Atom CPU and GPU power by measuring currents on power rails?

    http://www.anandtech.com/show/6529/busting-the-x86-power-myth-indepth-clover-trail-power-analysis/8

Although they never retracted this article, I think behind the scenes they took some flak from SoC vendors, because it's pretty hard to isolate power the way they tried to in that article (and the way NVIDIA is trying to in the demo). You just don't know what's hooked up to the power rail along with the CPU or GPU that you're measuring, and every SoC does things differently.

    I wouldn't pay much attention to the iPad power numbers, since they're guesswork at best, and also because NVIDIA compares a product shipping for the past year to an unreleased sample fresh from the fab and not due to be sold as a product for quite some time yet.

    Still, the absolute power numbers from the Logan demo show that Kepler can fit into mobile power envelopes just fine, contrary to the expectations of many on this board - and that's the takeaway from this demo.
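
    For anyone who hasn't read that piece, the methodology reduces to very simple arithmetic. Here's a minimal sketch, assuming idealized paired voltage/current samples on a rail; the function name and every number are made up for illustration, not taken from the article or the demo.

    ```python
    # Sketch of rail-based power measurement: sample voltage and current on a
    # supply rail, multiply for instantaneous power, then average over the run.
    # All names and numbers below are illustrative.

    def average_rail_power(voltage_samples, current_samples):
        """Mean power in watts from paired voltage (V) and current (A) samples."""
        assert len(voltage_samples) == len(current_samples) and voltage_samples
        instantaneous = [v * i for v, i in zip(voltage_samples, current_samples)]
        return sum(instantaneous) / len(instantaneous)

    # Hypothetical trace: a 1.0 V "GPU" rail drawing roughly 0.85-0.95 A under load.
    volts = [1.00, 0.99, 1.01, 1.00]
    amps = [0.85, 0.92, 0.88, 0.95]
    print(f"Average rail power: {average_rail_power(volts, amps):.2f} W")

    # The catch raised above: anything else hanging off that rail (caches,
    # interconnect, etc.) is silently included in this number.
    ```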
     
  9. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    376
    Likes Received:
    330
No, I'm not an easy believer, but between Nvidia and Intel, the latter has a very long history of marketing PR bullshit and benchmark cheats (the compiler against AMD, just to name one, and the most hilarious, the Haswell GPU being as fast as a GT 650M :lol:).
Back to Logan vs A6X: I think the data presented are mostly accurate, but it's a marketing pitch, so they only show you what they want to show you, focusing on the best selling point. Obviously the whole story is not so rosy, but we will only know it when it's available to independent reviewers...
     
  10. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
Stop twisting their words. Anand and Ryan at first had a hard time believing what they were seeing (i.e. they didn't expect this level of GPU performance at less than 1 W power consumption). Never in their wildest dreams could they have imagined that Kepler.M would be so power efficient. But the data is what it is, and they are believers now.

Guesswork at best? I doubt that. It is very evident from the video that NVIDIA took pains to isolate the GPU/CPU/memory power rails through hardware analysis and directed application data analysis. This is the real deal. Now, obviously future iPads will be more power efficient than the iPad 4, but who in the world would have expected 3x better GPU power efficiency for Kepler.M vs. the A6X (in addition to 4-5x higher peak performance headroom)?
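
    Purely as illustration of where a "3x" figure like that comes from, here is the arithmetic, assuming iso-performance between the two setups. The 0.9 W value stands in for the "less than 1 W" demo figure and 2.6 W is the A6X GPU-rail average discussed later in this thread; neither is an independent measurement.

    ```python
    # Illustrative arithmetic only: how a "~3x better perf/W" claim falls out
    # when two GPUs hit roughly the same frame rate at different rail powers.
    # These wattages are the demo figures discussed in this thread.

    logan_gpu_power_w = 0.9   # stands in for the "less than 1 W" demo figure
    a6x_gpu_power_w = 2.6     # average A6X GPU-rail draw quoted for the same test

    efficiency_ratio = a6x_gpu_power_w / logan_gpu_power_w
    print(f"Perf/W advantage at iso-performance: ~{efficiency_ratio:.1f}x")  # ~2.9x
    ```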
     
  11. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
Source? All I see are the following quotes - http://www.anandtech.com/show/7169/nvidia-demonstrates-logan-soc-mobile-kepler/2

    Anand could barely be flying his flag of disbelief any higher without seriously affecting his relationship with Nvidia.
     
  12. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
LOL, you are a piece of work. Rather than taking Anand's words out of context, how about quoting full sentences:

    It is pretty obvious what Anand and Ryan think about Logan, so stop playing dumb.
     
  13. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    18
    I thought this had already been discussed. I can think of a part or two of a SoC that might also be on its GPU power rail; it might be logical as the system bus is sometimes run at the same clock speed. However, it's beside the point, especially as the power consumed in that test scenario is likely to be predominantly GPU related.

    The 2.6 W on average drawn by the A6X power rail at full load is a reasonable amount to draw, all things considered. Logan is able to look so much better for a number of reasons stated before, not the least of which is the proportionately lower power consumed by running at a low (by its performance standard) voltage and frequency.

    nVidia hasn't fared well against PowerVR products in comparisons of comparable parts in the past. We'll have to see if that changes for this mobile Kepler versus a Rogue-based A7X.
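
    As a rough sketch of that last point, textbook dynamic-power scaling (P_dyn roughly proportional to C·V²·f) shows why a wide GPU run well below its peak voltage and frequency looks so frugal. The operating points below are invented for illustration, not Logan's actual DVFS table.

    ```python
    # Textbook dynamic-power scaling, P_dyn ~ C * V^2 * f, illustrating why a
    # big GPU run at a low voltage/frequency point can look very efficient.
    # The operating points below are invented for illustration.

    def relative_dynamic_power(v, f, v_ref, f_ref):
        """Dynamic power vs. a reference point, assuming constant switched capacitance."""
        return (v / v_ref) ** 2 * (f / f_ref)

    # Hypothetical: the same GPU at its peak point vs. a throttled demo point.
    peak_v, peak_f = 1.0, 600e6   # volts, Hz
    demo_v, demo_f = 0.8, 300e6

    scale = relative_dynamic_power(demo_v, demo_f, peak_v, peak_f)
    print(f"Demo point draws ~{scale:.0%} of peak dynamic power")  # ~32%
    ```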
     
  14. Brilliantdeve

    Newcomer

    Joined:
    Aug 12, 2013
    Messages:
    27
    Likes Received:
    0
I think NVIDIA's Project Logan should be compared to the Apple A8X, since NVIDIA won't be able to deliver it until 2H 2014 :)
     
  15. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    18
    That's a good point. Logan may fall closer to the next next generation.
     
  16. elroy

    Regular

    Joined:
    Jan 29, 2003
    Messages:
    269
    Likes Received:
    1
    I thought Logan parts were hitting the market Q1 2014?

    EDIT: Anand says H1 2014.
     
  17. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
Well, the writing is already on the wall. It is pretty obvious that past comparisons need not and should not apply when comparing to Kepler.M. In fact, it appears that Kepler.M will leapfrog past other ultra low power GPUs with respect to performance per watt, peak performance, and feature set.

No, all signs point to Kepler.M coming to market in 1H 2014 (in fact, there is even a reasonably good chance that products will come to market by the end of Q1 2014).
     
  18. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    18
    The 600+ MHz G6400 that ST-Ericsson was targeting in its A9600 indicates that PowerVR configurations could potentially deliver around 200 GFLOPS into an upcoming smartphone solution.

If the ratio of delivered performance to FLOPs implied by the purported MT8135 GLBenchmark 2.5 score is real and at all representative of Rogue in contemporary real-world workloads, then a little extrapolation of that data, combined with an understanding of the difference between what nVidia is demoing and what they're actually implying they'll deliver in a comparable product, leads me to believe that this Kepler will have a hard time just matching PowerVR, let alone leapfrogging it.
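
    For what it's worth, the back-of-envelope behind a peak-GFLOPS number like the one above is just lanes × 2 FLOPs per FMA × clock. The per-cluster lane count in the example is an assumption for illustration, not a confirmed G6400 spec, and, as the post says, delivered performance depends on how much of that peak real workloads actually extract.

    ```python
    # Generic peak-GFLOPS estimate of the kind used in these extrapolations:
    # peak = clusters * FP32 lanes per cluster * 2 (FMA) * clock in GHz.
    # The lane count is an assumed value for illustration, not a confirmed
    # G6400 specification; swap in your own numbers.

    def peak_gflops(clusters, fp32_lanes_per_cluster, clock_ghz, flops_per_lane_per_clock=2):
        return clusters * fp32_lanes_per_cluster * flops_per_lane_per_clock * clock_ghz

    # Example: a hypothetical 4-cluster part at the 600+ MHz mentioned above.
    print(peak_gflops(clusters=4, fp32_lanes_per_cluster=32, clock_ghz=0.6))  # 153.6
    ```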
     
    #1518 Lazy8s, Aug 17, 2013
    Last edited by a moderator: Aug 17, 2013
  19. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    376
    Likes Received:
    330
It's too early to judge performance, but one thing is sure: this time, and for the very first time in Tegra history, Nvidia will be beyond the competition on feature set, with OpenGL 4.3 / DX11 / CUDA 3.5 support.
I must say: finally! Even if it took the GPU market leader far too long.
     
    #1519 xpea, Aug 17, 2013
    Last edited by a moderator: Aug 17, 2013
  20. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    Just like all the Tegras before it. :lol:

    I don't recall Nvidia saying Shield would be on a 6-month refresh cycle?
     