NVIDIA Tegra Architecture

Only 20W for a 2GHz quad-core A57 plus a 1GHz, 512 GFLOPS FP32 GPU on 20nm is very impressive in my book. Quoting the 20W figure without taking the very high sustained performance into account is hiding half of the story, because X1 efficiency is at the top of the market and it can easily be adapted to a tablet.
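For what it's worth, that 512 GFLOPS figure falls straight out of the published shader configuration (a quick sketch; the one-FMA-per-CUDA-core-per-clock convention is assumed):

```python
# Back-of-the-envelope peak FP32 throughput for Tegra X1's Maxwell GPU:
# 2 SMMs x 128 CUDA cores, each retiring one FMA (2 FLOPs) per clock.
def peak_gflops(cores, flops_per_core_clock, clock_ghz):
    return cores * flops_per_core_clock * clock_ghz

tx1_gflops = peak_gflops(cores=2 * 128, flops_per_core_clock=2, clock_ghz=1.0)
print(tx1_gflops)  # 512.0
```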

Totally, except not necessarily. You reckon the Tegra is as beastly when sat next to, say, the i7-5600U? That's a 15W part.
 
Perhaps, but this wouldn't change the fact that you won't see this level of performance in a tablet, as Ryan said.
On 20nm, absolutely. On 16nm I hope Parker will make it (even if I doubt Nvidia is still interested in this market).
 

Actually, the 2-SMM Maxwell in TX1 may very well be quite a bit faster than the TDP-constrained HD 6000 with 48 EUs. I doubt that GPU rises above its 300MHz base clock very often in the 15W part. At 300MHz, we're talking ~250 GFLOPS FP32 for the Intel iGPU.
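Sketching that estimate (assuming the usual 16 FLOPs per EU per clock for Intel's Gen8 EUs, i.e. two SIMD-4 FMA pipes):

```python
# Rough peak FP32 for an Intel Gen8 iGPU: each EU has two SIMD-4 FMA
# pipes -> 2 pipes x 4 lanes x 2 FLOPs = 16 FLOPs per clock per EU.
def intel_igpu_gflops(eus, clock_ghz, flops_per_eu_clock=16):
    return eus * flops_per_eu_clock * clock_ghz

hd6000_at_base = intel_igpu_gflops(eus=48, clock_ghz=0.3)
print(round(hd6000_at_base, 1))  # 230.4, in the ballpark of the ~250 quoted
```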
 
The i7-5500U is my daily ride in an Acer TravelMate P645-SG, and this thing can't sustain full CPU/GPU speed at 15W even with a powerful fan attached (the CPU runs around 1.9GHz and the GPU around 700MHz when gaming). Even a GRID 2 type of game gets sub-30fps at 1366x768 on the lowest settings.
Again, you are comparing a part that throttles like hell to stay within its marketing TDP against the sustained full-performance TDP of another part. How is that fair?
 
My i7-5500U's iGPU stays around 700MHz when gaming, so its theoretical performance should be close to the TX1's. Taking Intel's massive manufacturing advantage into account, their iGPUs are a joke. Just imagine Parker on Intel's 2nd-gen 14nm FinFET...
 
I don't know who cares what chip N would look like on future process X when NV doesn't even seem to bother with anything outside automotive these days. What's their Tegra roadmap after Parker anyway? Parker on steroids and Parker PRO with cheese, or what?
 
Like it or not, Nvidia is here to stay, and their plans became clear with today's announcement:
Today NVIDIA is expanding their GameWorks developer program to the realm of Android devices. GameWorks encompasses a range of NVIDIA technologies and tools like PhysX, VisualFX, OptiX, and the NVIDIA Core SDK which allows developers to program for NVIDIA GPUs using NVAPI instead of APIs like DirectX or OpenGL. It also includes many tools to help developers test and debug their games.

AndroidWorks aims to simplify the experience of developing games on Android. It includes a number of libraries for developers to use, along with sample code. It also includes a number of tools for profiling performance and debugging.
full article here:
http://www.anandtech.com/show/9301/nvidia-introduces-androidworks-for-android-game-developers

They are clearly targeting the Android gaming market, where their strength is greatest.
 

I have absolutely nothing against NV, rather the contrary. For what it's worth, they've been targeting that Android gaming market since the GoForce era with no notable results. We'll see how it pans out and who'll chew again on his exaggerated optimism. "Here to stay" is obviously relative to their consistently single-digit market-share percentage.
 
If your info is correct, and you mostly have good info, then we have a new question: what does Nvidia want to manufacture at Samsung if it's not Parker? They mentioned Samsung as a manufacturer in their SEC filing for the first time this year, so I doubt it's only because of test wafers.

I have no idea. I've heard that Pascal is also on TSMC 16nm so they aren't manufacturing GPUs at Samsung either. I guess we'll have to wait and watch.
Even further: why the heck does it take so long? I personally expected Parker earlier, but shifted it in my mind to somewhere in 2016 because of the supposed switch to 14FF... (<---- me, confused as hell...)

Why does what take so long? Erista taped out about a year back (July 2014, apparently), and so far NV has typically been on a yearly cadence. From what I've heard and read though, 14/16nm design cycles (and costs) are a lot higher, so I don't think a switch this late was ever really on the cards.
Any information on the GPU? Parker was originally going to use Maxwell and I assume that's still true after its delay, but it would be interesting if it will use Pascal now.

I've heard it's still Maxwell. Either way, I don't think Pascal is going to be a huge change compared to Maxwell. Since they're already going for HBM and a new process, NV likely wanted to keep the other changes simple, so Maxwell for an SoC should still be a good choice.
I don't think the first Pascal part to tape out will be an SoC. Pretty sure it's Maxwell again. I would just bet on 3 SMMs instead of the 2 in X1. Anyway, it's unclear whether Pascal isn't just Maxwell plus HBM, and we won't see HBM in mobile SoCs anyway, but Wide I/O 2.

Yes, I agree with your speculation, but I think they will stick to 2 SMMs. The performance is already the highest in the industry, and with the extra headroom allowed by the 16nm process it should be more than enough for 2016.
As Apple's SoCs have proven over time, I would prefer to see Parker with 4 SMMs. You sacrifice a bit of silicon area, but you get better power consumption from running the GPU at lower clocks...
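The "wider but slower" argument rests on the dynamic-power relation P ∝ C·V²·f, since voltage can usually drop along with frequency. A toy model with made-up voltage points (not real DVFS data for any of these chips):

```python
# Toy dynamic-power model: P ~ units * V^2 * f. The voltage/frequency
# pairs below are hypothetical, purely to illustrate the scaling.
def rel_power(units, volts, clock_ghz):
    return units * volts**2 * clock_ghz

def rel_throughput(units, clock_ghz):
    return units * clock_ghz

narrow = rel_power(units=2, volts=1.0, clock_ghz=1.0)  # 2 SMMs at 1GHz
wide = rel_power(units=4, volts=0.8, clock_ghz=0.5)    # 4 SMMs at 500MHz

# Same throughput, lower power for the wide design (1.28 vs 2.0 rel. units):
print(rel_throughput(2, 1.0) == rel_throughput(4, 0.5))  # True
print(wide < narrow)  # True
```

The catch, as the reply below notes, is that the extra silicon area isn't free on post-28nm processes.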

You have to remember that after 28nm the cost of silicon is increasing a fair bit, so it's not a very viable strategy any more. 4 SMMs is overkill IMHO. I see them sticking to 2. The process advantage and whatever minor architectural gains they make should provide enough of an increase in performance.
 
If they don't increase the GPU frequency significantly in Parker, then staying with 2 SMMs doesn't sound like any change worth mentioning. Maxwell on the desktop looks more than mature to me, and from what I've heard Pascal doesn't come with fundamental changes to its ALUs compared to Maxwell either. All in all, if you're right, we should expect Parker to add Denver CPU cores and for the SoC to now fit in an ultra-thin tablet if needed.
 

Well, they're already beating the A8X quite handily, and in the Android space nothing comes close, so they really don't need to do that much. It should come with Denver cores, yes (hopefully with big.LITTLE). Even Tegra X1 should fit in an ultra-thin tablet; they just have to clock it lower.
 

Clock it low enough and you have the same or a similar result as the A8X GPU; per clock, both GPUs deliver the same number of FP32 and FP16 ops. The latter is clocked a tad over 450MHz AFAIK, while the X1 GPU peaks at 1GHz. A higher frequency than that wouldn't be the wisest idea on anything short of 16FF either, but they should know better.
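A quick sketch of that per-clock parity, using the commonly cited (not officially confirmed) 8-cluster, 32-ALU configuration for the A8X's GPU:

```python
# Both GPUs execute one FMA (2 FLOPs) per FP32 ALU per clock, so peak
# throughput differs only by frequency.
def fp32_flops_per_clock(alus):
    return alus * 2

a8x_per_clock = fp32_flops_per_clock(8 * 32)  # 8 clusters x 32 FP32 ALUs
x1_per_clock = fp32_flops_per_clock(2 * 128)  # 2 SMMs x 128 CUDA cores
print(a8x_per_clock, x1_per_clock)  # 512 512

a8x_gflops = a8x_per_clock * 0.45  # at ~450MHz -> ~230 GFLOPS
x1_gflops = x1_per_clock * 1.0     # at 1GHz -> 512 GFLOPS
```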

By the way, if reducing frequency alone did the trick in a ULP SoC, you would have seen a K1 in a smartphone at half its GPU frequency. It's the exact same reason Apple didn't use an 8-cluster GPU at 233MHz for the iPhone 6, but a quad-cluster GPU at 533MHz.
 
I guess it makes some sense: it's a big device with lots of pixels and it's a flagship, so it needs a really powerful SoC, but not integrated LTE.
 
From the pricing, probably not a high volume device.

Certainly nothing like the Nexus 7.

Would be more competitive if it did have LTE.
 