NVIDIA Tegra Architecture

I dunno if it's for power reasons as well, but I think they need to wait a bit for economic reasons: the margins aren't great on these at the best of times, and they're obviously much worse early on with expensive wafers and bad yields.
 
Based on AnTuTu scores of ~41,000 and GFXBench 2.7 Egypt HD (Offscreen 1080p) scores of ~63 fps, it looks like the T4 variant in the final Shield hardware is ~15% faster than the reference Tegra 4 tablet.
 
Tegra hasn't turned a profit yet, so basically, yes. Of course, NVIDIA is betting that Tegra will eventually grow enough to be profitable.

But if it doesn't happen within a couple of generations, I guess they may reconsider. Bleeding money is only fun for so long.
Well, knowing that every Nvidia product will be "sort of" a Tegra in the future (yes, even GPUs), they don't have much choice here...
 
Nvidia once said something like "eventually, every GPU we make will be a Tegra", and since then people have been applying their own logic to what they meant.
 
What NVIDIA meant is that, at some point some years in the future, each and every chip that NVIDIA makes will essentially be a Tegra chip with integrated components on die. Tegra will also essentially be the building block for NVIDIA GPUs moving forward. Note that Tegra can no longer be considered a separate business entity within NVIDIA, as the technology and resources used in other areas such as GeForce will be heavily leveraged for use in Tegra products.
 
I'm not sure I see the point in putting Cortex A15/57 cores in mainstream GPUs.
 
How can an A15 be better than a Core i7?
I can see lower latency, and somewhat more consistent performance with lower-end CPUs, but then what?
 
Seems like a big win for GPGPU setups that currently need a motherboard with CPU, RAM, and so on. That adds a lot to cost and area. If you had a usable CPU on the GPU, it could run totally standalone for any tasks that don't require heavy CPU support.

For discrete GPUs there's less of a point, I don't know if we'll really see this happen.

Anyway I'm pretty sure this will happen first with Project Denver cores, not A15s or A57s.
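To make the current division of labour concrete, here is a minimal CUDA sketch of what the host CPU does in a conventional GPGPU setup: allocating device memory, copying data, launching the kernel and synchronizing. The vector-scale kernel and all names in it are just hypothetical placeholders, but this housekeeping is essentially the work an on-die CPU core would take over in a standalone GPU.

```cuda
// Minimal sketch of the conventional GPGPU host/device split. Everything the
// host CPU does below (allocation, copies, launch, synchronization) is the
// part a standalone GPU with usable on-die CPU cores could absorb.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));                      // host CPU sets up device memory
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);            // host CPU issues the launch
    cudaDeviceSynchronize();                                  // host CPU waits for completion

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("first element: %f\n", host[0]);

    cudaFree(dev);
    delete[] host;
    return 0;
}
```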
 
I certainly see the appeal for HPC, but the current trend for NVIDIA seems to be minimizing the amount of GPGPU-specific logic in mainstream gaming GPUs, as illustrated by the divide between GK104 and GK110. So it wouldn't seem consistent to start putting CPU cores (A57, Denver or otherwise) into every notebook GPU.
 
I agree. It would also make even less sense to have everything they sell include camera controllers and the usual I/O peripherals, along with big fat PCIe interfaces. I think the whole "everything will be Tegra" comment might be overstated.
 
GPUs already have a command processor that does some of what you are thinking of.
My experience is that drivers eat a non-negligible amount of CPU. As an example, threading the issuing of OpenGL commands in Wine almost doubles the frame rate of WoW (this experiment was done before NVIDIA started threading its Linux drivers). Of course that's a single, admittedly odd, data point :)
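For readers who haven't seen the trick, the pattern is simple: the application thread records commands into a queue and a dedicated worker thread drains it, so the driver's CPU cost comes off the main thread. Below is a minimal, host-only sketch of that pattern with placeholder commands; it is not Wine's actual implementation, and in the real case the queued entries would be OpenGL calls issued on the thread that owns the GL context.

```cuda
// Host-only sketch of threaded command issue: the app thread enqueues work,
// a worker thread "issues" it, keeping driver overhead off the main thread.
// The queued std::function entries stand in for real OpenGL calls.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class CommandThread {
public:
    CommandThread() : worker_([this] { Run(); }) {}
    ~CommandThread() {
        Submit(nullptr);                  // empty command signals shutdown, after queued work
        worker_.join();
    }
    void Submit(std::function<void()> cmd) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(cmd));
        }
        cv_.notify_one();
    }
private:
    void Run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !queue_.empty(); });
            auto cmd = std::move(queue_.front());
            queue_.pop();
            lock.unlock();
            if (!cmd) return;             // shutdown marker
            cmd();                        // "driver call" runs off the app thread
        }
    }
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    std::thread worker_;                  // declared last so the other members exist first
};

int main() {
    CommandThread gl_thread;
    // The app thread stays free for game logic while the worker issues the commands.
    for (int frame = 0; frame < 3; ++frame)
        gl_thread.Submit([frame] { std::printf("issuing draw calls for frame %d\n", frame); });
    return 0;
}
```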
 
A counter-argument is that they did put all the GPGPU stuff in the GK208 ;). It's the lowest-end Kepler chip, with all the GK110 features except fast DP and ECC, and a 512 KB L2 cache, on par with GK104 and bigger than GK107's and GK106's.

On Denver CPU cores, though, I too believed that all GeForce/Tesla parts would include them, but someone pointed out on another thread that I was wrong to expect that, at least for Maxwell. Maxwell GPUs won't get CPU cores, not even Tesla - what we may see instead is rack units pairing separate ARM CPUs with dedicated Teslas, at least at first, in situations where the lack of CPU horsepower doesn't hold back the computation.

The misconception comes from vague statements like "same architecture from cell phones to supercomputers": people note that NVIDIA may want APUs as the end game, like AMD and possibly Intel, and infer that everything will become a Tegra of sorts.

Maybe Volta won't have CPU cores either, and the (placeholder) "Echelon" floor plan (Einstein GPU) did not show them. In some undetermined future, though, I think it's possible or even likely that we'll see CPU cores in all GPUs.
 
In GK208, they basically put in all the GPGPU stuff that was cheap enough. I don't think A57/Denver-class cores would qualify.

And even if NVIDIA decided to add CPU cores to every GPU, that would still be a far cry from a real Tegra SoC with all the I/O included, radio, etc.
 
Maybe the comment about Tegra was referring to the shift toward energy efficiency as the top priority in architecture design, even for future high-end GPUs.

Anyway, Denver-based CPU cores will likely be able to scale to relatively small die sizes, so I don't see that being an issue for inclusion with a discrete GPU.
 
Rumors of Nvidia coming out with their own tablet, aside from Project Shield.

A last resort, putting out a product that would compete with products from prospective customers?

OTOH, they may be working on the second Surface RT.
 