well, knowing that every Nvidia product will be "sort of" Tegra in the future (yes, even GPUs), they don't have much choice here...
Tegra hasn't turned a profit yet, so basically, yes. Of course, NVIDIA is betting that Tegra will eventually grow enough to be profitable.
But if it doesn't happen within a couple of generations, I guess they may reconsider. Bleeding money is only fun for so long.
well, knowing that every Nvidia product will be "sort of" Tegra in the future (yes, even GPUs), they don't have much choice here...
What NVIDIA meant is that, at some point some years in the future, each and every chip that NVIDIA makes will essentially be a Tegra chip with components integrated on die. Tegra will also essentially be the building block for NVIDIA GPUs moving forward. Note that Tegra can no longer be considered a separate business entity within NVIDIA, as the technology and resources used in other areas such as GeForce will be heavily leveraged in Tegra products.
To let more of the driver-side work be done on the GPU?
I'm not sure I see the point in putting Cortex A15/57 cores in mainstream GPUs.
It can be better if it leaves an i7 core free for other stuff
How can an A15 be better than a Core i7? I can see lower latency, and slightly more consistent performance with weaker CPUs, but then what?
I'm not sure I see the point in putting Cortex A15/57 cores in mainstream GPUs.
Seems like a big win for GPGPU setups that currently need a motherboard with CPU, RAM, and so on. That adds a lot to cost and area. If you had a usable CPU on the GPU, it could run totally standalone for any task that doesn't require heavy CPU support.
For discrete GPUs there's less of a point; I don't know if we'll really see this happen.
Anyway I'm pretty sure this will happen first with Project Denver cores, not A15s or A57s.
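Just to spell out what "heavy CPU support" means in practice: in today's setups even a trivial GPGPU job needs the host CPU for all the orchestration around the kernel. Here's a rough, generic CUDA sketch (toy kernel, made-up sizes, nothing vendor-specific) of that host-side babysitting:

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Trivial kernel: the only part that actually runs on the GPU's SMs.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    // Everything below is host-CPU work: allocation, copies, launch, sync.
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    const int block = 256;
    const int grid = (n + block - 1) / block;
    scale<<<grid, block>>>(dev, 2.0f, n);

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    std::printf("first element after scaling: %f\n", host[0]);
    return 0;
}

An on-package CPU core would only need to run that host-side part; the kernel itself already lives on the GPU.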
I certainly see the appeal for HPC, but the current trend for NVIDIA seems to be minimizing the amount of GPGPU-specific logic in mainstream gaming GPUs, as illustrated by the divide between GK104 and GK110. So it wouldn't seem consistent to start putting CPU cores (A57, Denver or otherwise) into every notebook GPU.
My experience is that drivers eat a non-negligible amount of CPU. As an example, threading the issuing of OpenGL commands in Wine almost doubles the frame rate of WoW (this experiment was done before nvidia started threading its Linux drivers). Of course, that's a single, admittedly odd, data point.
GPUs already have a command processor that does some of what you are thinking of.
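The Wine trick being quoted is essentially a producer/consumer handoff: the application thread records cheap command objects and a dedicated thread burns the driver time issuing them. A toy sketch of that pattern (made-up Command type, nothing resembling Wine's or NVIDIA's actual code):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <cstdio>

// A made-up "command" is just a callable; in a real wrapper it would be the
// buffered GL/D3D call plus its arguments.
using Command = std::function<void()>;

class CommandQueue {
public:
    void push(Command c) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(c)); }
        cv_.notify_one();
    }
    // Dedicated submission thread: pops commands and "issues" them, so the
    // game/application thread never blocks inside the driver.
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            Command c = std::move(q_.front());
            q_.pop();
            lock.unlock();
            if (!c) return;   // empty command = shutdown
            c();              // this is where the driver time is spent
        }
    }
private:
    std::queue<Command> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    CommandQueue queue;
    std::thread submitter(&CommandQueue::run, &queue);

    // "Game" thread: enqueues cheap records instead of calling into the driver.
    for (int frame = 0; frame < 3; ++frame)
        queue.push([frame] { std::printf("issuing draw calls for frame %d\n", frame); });

    queue.push(Command{});  // shut the submission thread down
    submitter.join();
    return 0;
}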
I certainly see the appeal for HPC, but the current trend for NVIDIA seems to be minimizing the amount of GPGPU-specific logic in mainstream gaming GPUs, as illustrated by the divide between GK104 and GK110. So it wouldn't seem consistent to start putting CPU cores (A57, Denver or otherwise) into every notebook GPU.
A counter-argument is that they did put all the GPGPU stuff in GK208. It's the lowest-end Kepler chip, yet it has all the GK110 features except fast DP and ECC, with a 512K L2 cache, on par with GK104 and bigger than GK107's and GK106's.
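(For what it's worth, a few of those per-chip differences are visible straight from the CUDA runtime; here's a generic query sketch, the interesting fields being the L2 size, ECC and the SM revision:)

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, dev);
        // The GK208 / GK104 / GK110 style differences show up here:
        // L2 size, ECC support, and the compute capability revision.
        std::printf("%s: SM %d.%d, L2 %d KiB, ECC %s\n",
                    prop.name, prop.major, prop.minor,
                    prop.l2CacheSize / 1024,
                    prop.ECCEnabled ? "on" : "off");
    }
    return 0;
}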
On Denver CPU cores, though: I too believed that all GeForce/Tesla parts would include them, but someone pointed out in another thread that I was wrong to expect that, at least for Maxwell. Maxwell GPUs won't get CPU cores, not even Tesla; what's possible is rack units with separate ARM CPUs and dedicated Teslas, at least at first, in situations where the lack of CPU horsepower doesn't impede the computation.
The misconception comes from vague statements like "same architecture from cell phones to supercomputers": people note that NVIDIA may want APUs as the end game, like AMD and possibly Intel, and infer that everything will become a Tegra of sorts.
Maybe Volta doesn't have CPU cores, and the (placeholder) "Echelon" floor plan (Einstein GPU) did not show them either. At some undetermined point in the future, though, I think it's possible or even likely that we'll see CPU cores in all GPUs.