NVIDIA Tegra Architecture

Discussion in 'Mobile Graphics Architectures and IP' started by french toast, Jan 17, 2012.

  1. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
    Things like Android L reference platform and/or Maxwell GPU are far better selling points than an idiotic Antutu score.
     
  2. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
    Agreed. I am personally far more intrigued with the fully custom Denver core than with the stock ARM core. As long as NVIDIA can get Denver + Maxwell into Tegra and shipping by holiday season 2015, then I will be reasonably pleased.
     
    #2962 ams, Sep 30, 2014
    Last edited by a moderator: Sep 30, 2014
  3. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,712
    Likes Received:
    131
    Considering the time in which it was under development, perhaps it was a way of simply hedging their bets. From what we know now, it looks like Denver is just fine, but perhaps Nvidia preferred to also have an A57 solution in case things did not go as expected.
     
  4. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    18
    Qualcomm's involvement in the development of cellular standards has certainly been massively advantageous and led to their dominant position in the smartphone SoC space, even as their application processor performance was mediocre relative to the competition only a few years back (they're obviously very competitive in CPU and GPU performance these days, though).

    While they're a huge barrier to widespread success in the smartphone space for competitors like Intel and nVidia, there has been and still is a market for a non-Qualcomm-all-in-one SoC for flagship smartphones for the US and western markets. Processing performance and time to market are important enough factors for a flagship smartphone that phone makers have been and are willing to go with discrete application processor and baseband/modem parts. nVidia even won a few of those slots in the past with phones like the Optimus 2X and some versions of the HTC One. The fact that no phone maker is trying that with K1, along with K1's results in battery life and thermal tests, gives a clear indication that the architecture as it's been implemented is not compelling to OEMs this time.
     
  5. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
    That is not really true. For flagship smartphone SoCs in North America, the market is dominated by Apple (who make their own SoCs paired with a separate Qualcomm baseband modem), Samsung (who make their own SoCs paired with a separate Qualcomm baseband modem, or use Qualcomm SoCs), and Qualcomm.

    Tegra 3 was used only in the HTC One X international variant (the USA variant used a Qualcomm SoC).

    Tegra 4 was used only in the Xiaomi Mi 3 China Mobile variant (the other Chinese and worldwide variants used a Qualcomm SoC).

    So again, Qualcomm's baseband modem strength is pretty obvious even in these cases.

    That is not a logical conclusion. Tegra K1's GPU is very power efficient and the power consumption and performance can be easily dialed down to fit within the power envelope of a smartphone ( http://media.bestofmicro.com/tegra-k1-kepler,F-Z-416591-22.jpg ). So once again, the processor hardware is not the issue here, but Qualcomm's baseband modem strength most certainly is.

    FWIW, I do expect to see TK1 Denver inside some high end smartphone(s) when all is said and done.
     
    #2965 ams, Oct 1, 2014
    Last edited by a moderator: Oct 1, 2014
  6. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    18
    nVidia has claimed some kind of superior performance efficiency in their technical marketing for every generation of Tegra since the second, yet the majority of criteria and benchmarks used by the industry to evaluate that haven't agreed. I'm not convinced by their claims.
     
  7. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    17,081
    Likes Received:
    6,416
    And again, that is only what Nvidia says. It may or may not reflect reality across a broad operating spectrum. To blindly go by that slide is to be naive in the extreme.

    When it finally hits the wild in shipping products, we'll get a chance to see if the claims hold up. Until then, no definitive statements can be made about how efficient it is or how well it performs within any given power envelope.

    Regards,
    SB
     
  8. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    94
    Agreed..they seem to be targeting tablets with the ARM variants and personally I think the main purpose of Denver is to cater to the server market (along with eventual integration in Geforce/Quadro/Tesla). Everyone is scrambling to get a foothold in this segment as it seems to have huge potential and I think NV has a good chance as they have excellent GPU compute performance and programming tools/support to offer as well.
    The Denver SoC may be on a different design cycle (~6 months behind) compared to the standard ARM variant. Could be a case of hedging their bets in case of any delays to Denver (the timescales involved here are pretty large..the design process for these chips was started sometime in 2013). And possibly big.LITTLE could be more power efficient than having only Denver cores..which matters a lot more in the mobile/tablet segment (see above on my views of where Denver is targeted). And the marketing aspect of an "octa core" is very real. Even Qualcomm, who were very opposed to it, are now offering an octa-core A53 SoC for this very reason (I'd rather have two higher clocked A57s or a 2 A57 + 2 A53 config for this segment, but I doubt we will ever see such an SoC made).
    You greatly overestimate the intelligence of the average consumer.
     
    #2968 Erinyes, Oct 1, 2014
    Last edited by a moderator: Oct 1, 2014
  9. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    922
    Likes Received:
    1
    Location:
    Germany
    MediaTek started with low-cost/low-end SoCs, but I wouldn't call the latest SoCs from them "low-end" anymore. More like high-end. At least the CPUs are high-end, and 64-bit CPUs (A53 and A57) are around the corner. Only the GPUs within the SoCs are still not high-end but mid-range.

    http://en.wikipedia.org/wiki/MediaTek
     
  10. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    922
    Likes Received:
    1
    Location:
    Germany
    I thought that they have given up on servers too?
     
  11. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
    Let's see first if and where they end up using Denver cores there and not ARM cores instead.

    Frankly big.LITTLE instead of 4+1 surprises me more than the move to standard ARM cores (even if it's for just one of two possible Erista variants).

    Only of indirect relevance, but the Mediatek MT8135 has a 2*A15 + 2*A7 config; IMHO always of course: the cases where you'd need more than 2 low power threads are still relatively rare these days. However, when you do, it's better to have 4 low end cores than two. The best big.LITTLE config for me would be 2*A57 + 4*A53. A53 cores are so dirt cheap that the added die area for two more of those cores is not worth mentioning.

    Wouldn't you say that you underestimate the power of marketing?
     
  12. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
    It is not a claim, it is a real measurement of power consumption at the voltage rails by the application processor + mem.
     
  13. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
    Which anyone is entitled to doubt for any IHV, since it's a measurement that doesn't come from an independent source.
     
  14. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    94
    While not low end, I wouldn't exactly call them high end either. Their A57 SoC is still on 28nm (not even sure if it's 28HPM, though it should be) and will not match any of the others on performance and/or power. And as pointed out by you..the GPU is decidedly mid-range. Their ISP is also usually not as good as Qualcomm's/Samsung's. Anyway, they make most of their money in the lower mid-range market and I don't see this changing anytime soon. AFAIK the MT6582 and MT6589 are their highest selling chips.
    Given up? They haven't even started yet.
    Off the top of my head, a big.LITTLE A53 config is likely more power efficient than an A57 companion core. big.LITTLE also allows higher ultimate performance, as all cores can be active at once if needed. Die area is lower as four A53s are smaller than one A57, though this is very minor and probably of no significance.
    How many MT8135s have they sold in comparison to the MT6582, 6589 and 6592? (Heck, I can't even name one shipping product with the MT8135)
    True..the area cost is so little these days you can pack pretty much all the low power cores you want. The Snapdragon 808 seems to be made right for you!

    Though I should add, I have yet to see any SoC, whether it be A7 based or A15+A7 based, match the efficiency of the Snapdragon 800. Clearly Qualcomm's approach is working better right now.
    Not at all..on the contrary..marketing takes advantage of my statement all the time! :wink:
     
  15. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
    I thought the 8135 (which is a tablet-only SoC) was "dead in the water", however just yesterday I noticed that it made it into two of Amazon's new low end tablets (7" and 6").

    So did Xiaomi sell 50K MiPads in a few hours because of anything else but "Kepler" inside? :razz:
     
  16. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    922
    Likes Received:
    1
    Location:
    Germany
    If you look at the latest "highend" Chinese phones, they are starting to use the new MT6595 a lot. Maybe the Antutu benchmark results of >45000 are enough for them.


    Yes, it seems NVIDIA didn't want to use Denver for servers. They have the SoC, but instead of pushing it they stopped, which seems to imply that they have given up on servers.
     
  17. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
  18. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
    Tom's Hardware recently reviewed Shield tablet, and here is what they had to say about battery life in GFXBench 3.0 T-Rex Onscreen test:

    "Leaving all four CPU cores active at their top frequency, I set the frame rate limit to 30 and reran GFXBench 3.0. With about half as many frames to draw, battery life improved from 138 minutes (2.3 hours) to 315 minutes (5.25 hours), a 2.28x increase with playable performance".
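    The 2.28x figure quoted there is just the ratio of the two runtimes, which is easy to sanity-check (a minimal sketch; the minute values are taken straight from the quote above):

    ```python
    # Sanity check of the Tom's Hardware battery-life figures quoted above.
    # Assumption: the quoted 2.28x is simply capped runtime / uncapped runtime.

    uncapped_min = 138  # runtime at uncapped frame rate (minutes)
    capped_min = 315    # runtime with the 30 fps frame-rate limit (minutes)

    improvement = capped_min / uncapped_min
    print(f"{uncapped_min} min -> {capped_min} min: {improvement:.2f}x")  # 2.28x
    ```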
     
    #2978 ams, Oct 5, 2014
    Last edited by a moderator: Oct 5, 2014
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,601
    Likes Received:
    643
    Location:
    New York
    The shield is a very good tablet but unless it gets into retail stores it won't sell as well as it should.
     
  20. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
    Looking at Tegra Erista, we already know that single precision GFLOPS per watt is ~ 1.625x higher with Tegra M1 than with Tegra K1. So let's break down the graphics performance improvement in the Maxwell-powered Tegra M1 vs. the Kepler-powered Tegra K1:

    Number of CUDA cores
    Tegra K1: 192
    Tegra M1: 256
    Performance improvement with Tegra M1 Erista: 1.333x

    Performance per CUDA core
    Tegra K1: 1.0x
    Tegra M1: 1.4x
    Performance improvement with Tegra M1 Erista: 1.400x

    Peak GPU clock operating frequency
    Tegra K1: 852MHz
    Tegra M1: 1038MHz
    Performance improvement with Tegra M1 Erista: 1.218x

    Overall graphics performance improvement with Tegra M1 Erista vs Tegra K1 Logan at the same power envelope: 2.27x !!!
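    The breakdown above is multiplicative, so it can be reproduced in a few lines (a sketch only; the per-core gain and the "M1"/Erista core count and clock are the estimates from this post, not official specs):

    ```python
    # Reproducing the multiplicative performance breakdown in the post above.
    # All Tegra "M1"/Erista figures are the poster's estimates, not official specs.

    cuda_core_ratio = 256 / 192   # ~1.333x more CUDA cores (M1 vs K1)
    per_core_ratio = 1.4          # claimed per-core performance gain
    clock_ratio = 1038 / 852      # ~1.218x higher peak GPU clock

    overall = cuda_core_ratio * per_core_ratio * clock_ratio
    print(f"Overall improvement: {overall:.2f}x")  # ~2.27x
    ```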
     