NVIDIA Tegra Architecture

Discussion in 'Mobile Graphics Architectures and IP' started by french toast, Jan 17, 2012.

  1. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
I dunno if it's for power reasons as well, but I think they need to wait a bit for economic reasons too: the margins aren't great on these at the best of times, and they're obviously much worse at the start, with expensive wafers and bad yields.
     
  2. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
Based on AnTuTu scores of ~41,000 and GFXBench 2.7 Egypt HD (Offscreen 1080p) scores of ~63 fps, it looks like the T4 variant in final Shield hardware is ~15% faster than the reference Tegra 4 tablet.
     
  3. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
well, knowing that every Nvidia product will be "sort of" Tegra in the future (yes, even GPUs), they don't have much choice here...
     
  4. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    What do you mean?
     
  5. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
Nvidia once said something like "eventually, every GPU we make will be a Tegra", and since then people have been applying their own logic to what they meant.
     
  6. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
What NVIDIA meant is that, at some point some years in the future, every chip NVIDIA makes will essentially be a Tegra chip, with components integrated on die. Tegra will essentially be the building block for NVIDIA GPUs moving forward. Note that Tegra can no longer be considered a separate business entity within NVIDIA, as the technology and resources used in other areas such as GeForce will be heavily leveraged in Tegra products.
     
  7. wco81

    Legend

    Joined:
    Mar 20, 2004
    Messages:
    6,920
    Likes Received:
    630
    Location:
    West Coast
    But will there be a future if they can't get Tegra on mobile devices?

    Other than the Shield?
     
  8. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    I'm not sure I see the point in putting Cortex A15/57 cores in mainstream GPUs.
     
  9. Laurent06

    Veteran

    Joined:
    Dec 14, 2007
    Messages:
    1,091
    Likes Received:
    491
To let more of the driver-side work be done on the GPU?

    Has there been a study of how much work is done on CPU in drivers?
     
  10. fehu

    Veteran

    Joined:
    Nov 15, 2006
    Messages:
    2,067
    Likes Received:
    992
    Location:
    Somewhere over the ocean
How can an A15 be better than a Core i7?
I can see latency benefits, and slightly more consistent performance compared to lower-end CPUs, but what else?
     
  11. Laurent06

    Veteran

    Joined:
    Dec 14, 2007
    Messages:
    1,091
    Likes Received:
    491
It can be better if it leaves an i7 core free for other stuff ;)
     
  12. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    9,044
    Likes Received:
    1,117
    Location:
    WI, USA
    GPUs already have a command processor that does some of what you are thinking of.
     
  13. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
Seems like a big win for GPGPU setups that currently need a motherboard with CPU, RAM, and so on. That adds a lot to cost and area. If you had a usable CPU on the GPU, it could run totally standalone for any task that doesn't require heavy CPU support.

For discrete GPUs there's less of a point; I don't know if we'll really see this happen.

    Anyway I'm pretty sure this will happen first with Project Denver cores, not A15s or A57s.
     
  14. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    I certainly see the appeal for HPC, but the current trend for NVIDIA seems to be minimizing the amount of GPGPU-specific logic in mainstream gaming GPUs, as illustrated by the divide between GK104 and GK110. So it wouldn't seem consistent to start putting CPU cores (A57, Denver or otherwise) into every notebook GPU.
     
  15. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
I agree. It would also make even less sense for everything they sell to have camera controllers and the usual I/O peripherals, along with big, fat PCIe interfaces. I think the whole "everything will be Tegra" comment might be overstated.
     
  16. Laurent06

    Veteran

    Joined:
    Dec 14, 2007
    Messages:
    1,091
    Likes Received:
    491
My experience is that drivers eat a non-negligible amount of CPU. As an example, threading the issuing of OpenGL commands in Wine almost doubled the frame rate of WoW (this experiment was done before NVIDIA started threading its Linux drivers). Of course, that's a single, admittedly odd data point :)
     
  17. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
A counter-argument is that they did put all the GPGPU stuff in GK208 ;). It's the lowest-end Kepler chip, yet it has all the GK110 features except fast DP and ECC, with a 512 KB L2 cache, on par with GK104's and bigger than GK107's and GK106's.

On Denver CPU cores, though: I too believed that all GeForce/Tesla parts would include them, but someone pointed out on another thread that I was wrong to expect that, at least for Maxwell. Maxwell GPUs won't get CPU cores, not even Tesla; what's possible is rack units with separate ARM CPUs and dedicated Teslas, at least at first, in situations where the lack of CPU horsepower doesn't impede the computing.

The misconception comes from vague statements like "same architecture from cell phones to supercomputers": people observe that NVIDIA may want APUs as the end game, like AMD and possibly Intel, and infer that everything will become a Tegra of sorts.

Maybe Volta doesn't have CPU cores either, and the (placeholder) "Echelon" floor plan (Einstein GPU) did not show them. In some undetermined future, though, I think it's possible or even likely that we'll see CPU cores in all GPUs.
     
  18. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    In GK208, they basically put in all the GPGPU stuff that was cheap enough. I don't think A57/Denver-class cores would qualify.

    And even if NVIDIA decided to add CPU cores to every GPU, that would still be a far cry from a real Tegra SoC with all the I/O included, radio, etc.
     
  19. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    19
Maybe the comment about Tegra was referring to the shift toward energy efficiency as the priority in architecture design, even for future high-end GPUs.

    Anyway, Denver-based CPU cores will likely be able to scale to relatively small die sizes, so I don't see that being an issue for inclusion with a discrete GPU.
     
  20. wco81

    Legend

    Joined:
    Mar 20, 2004
    Messages:
    6,920
    Likes Received:
    630
    Location:
    West Coast
There are rumors of NVIDIA coming out with its own tablet, aside from Project Shield.

A last resort: putting out a product that would compete with those of prospective customers?

    OTOH, they may be working on the second Surface RT.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.