NVIDIA Tegra Architecture

Yes, when I say CPU I mean the software.

Doing a virtual cache mapping is problematic because the associativity, line size, and so on have a practical impact on how the software behaves, just as the size does. They have to emulate the actual behavior, and the only way to really do that in a cache is to have the same cache arrangement.

Again, depending on how you remap the cache-ops, the only thing that needs to be guaranteed is that the data ends up more pessimistically coherent than the software asked for. Performance on the LP core need not be a big concern.
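For illustration, here's a minimal sketch (with entirely hypothetical names - nothing here is from actual Tegra or ARM code) of what a pessimistic cache-op remap can look like when the guest-visible line size doesn't match the physical one:

```c
/* Hypothetical sketch: remapping a guest-visible "clean line" cache op
 * onto a physical cache with a different line size. The safe
 * (pessimistic) fallback is to touch every physical line the guest
 * line could cover: over-cleaning is harmless, under-cleaning is a
 * correctness bug. */
#include <stdint.h>

#define PHYS_LINE  32u   /* physical line size in bytes (made up)    */
#define GUEST_LINE 64u   /* line size the software assumes (made up) */

extern void phys_clean_line(uintptr_t addr);  /* assumed HW primitive */

static void guest_clean_line(uintptr_t guest_addr)
{
    uintptr_t base = guest_addr & ~(uintptr_t)(GUEST_LINE - 1);
    for (uintptr_t a = base; a < base + GUEST_LINE; a += PHYS_LINE)
        phys_clean_line(a);
}
```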

Of course, it's not like nVidia is using a custom cache controller to begin with.

For Tegra 3 at least.

Full swapping is a standard usage model for big.LITTLE, and ARM provides firmware code for it - and it's the swapping latency that ARM is describing (what else would it be?). Sure, the software triggers the swapping in big.LITTLE, but why would that relax the latency requirements? If anything, you would want something that transparently swaps you to be as low latency as possible.
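For reference, here's a rough sketch of the flow a cluster switch is usually described as involving. The function names are hypothetical, not ARM's actual switcher firmware; the dominant latency costs are the cache clean and the inbound cluster's power-up:

```c
/* Rough sketch of a big.LITTLE cluster switch (hypothetical function
 * names; the real flow lives in secure firmware). */
extern void power_up_cluster(int cluster);
extern void save_cpu_context(void);       /* GP regs, VFP/NEON, sysregs */
extern void clean_dcache_all(void);       /* push dirty lines to L2/DRAM */
extern void exit_coherency(void);         /* leave the SCU/CCI domain    */
extern void restore_cpu_context_on(int cluster);

void switch_cluster(int inbound)
{
    power_up_cluster(inbound);        /* overlap power-up with the save */
    save_cpu_context();               /* snapshot architectural state   */
    clean_dcache_all();               /* make dirty data visible to the
                                         inbound cluster                */
    exit_coherency();                 /* outbound can now power down    */
    restore_cpu_context_on(inbound);  /* execution resumes over there   */
}
```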

I thought swapping was considered a stop-gap solution until the OS is heterogeneous aware.
 
GPUs with a better fill rate balance than the Adrenos compare more favorably in Rightware's navigation test than in Taiji or Hoverjet, and the navigation benchmark does use a mix of simpler shaders. So, Adreno's shader strength would seem to be one significant factor in its comparatively good Basemark performance, yet benchmark design and optimizations - or, conversely, software bugs - have also played a big part.

As mentioned, the consensus on Qualcomm's compiler is that it leaves a great deal of performance unexploited, and this seems to be true of other mobile GPUs and their compilers as well, to a surprisingly high degree. However, as alluded to in the past, steps to correct this could be underway.
 
Does anyone know if GeForce ULP is using a scalar SIMT arrangement for its "cores"? That'd potentially make it easier to compile for. Anandtech seemed to suggest that it works this way, and may be part of why nVidia calls them cores in the first place.
 
NVIDIA PerfHUD ES (which is very simple to set up on a Transformer Prime but harder on any other device AFAIK) gives you the cycle count and register count for the VS and PS when you click on a program. The results are somewhat confusing (they look scalar at first sight but actually aren't) and don't match either a traditional Vec4 (a la NV30) or a pure scalar SIMT (a la G80). I probably shouldn't get into what additional tests I ran and what my exact conclusions were, but I think it's fair to say the efficiency is much better than a Vec4 architecture but worse than a pure scalar SIMT a la G80.
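As a toy illustration of why this distinction matters for the compiler, here's some back-of-envelope arithmetic on a made-up instruction mix. Real Vec4 compilers co-issue and swizzle, so actual efficiency lands somewhere in between, much as the PerfHUD numbers suggest:

```c
/* Toy comparison of ALU utilization: naive Vec4 issue vs. scalar SIMT
 * on the same (made-up) instruction mix. Purely illustrative. */
#include <stdio.h>

int main(void)
{
    /* Component width of each operation in a hypothetical shader. */
    int widths[] = { 3, 3, 1, 1, 4, 1 };
    int n = sizeof widths / sizeof widths[0];

    int lane_ops = 0;              /* scalar SIMT: one lane-op per
                                      component, fully utilized       */
    for (int i = 0; i < n; i++)
        lane_ops += widths[i];

    int vec4_slots = n * 4;        /* naive Vec4: a full 4-wide slot
                                      per instruction, used or not    */

    printf("useful lane-ops:  %d of %d\n", lane_ops, vec4_slots);
    printf("vec4 utilization: %.0f%%\n", 100.0 * lane_ops / vec4_slots);
    return 0;
}
```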

Keep in mind they don't support dynamic branching at all in the PS (predication only), so that might have led them to some unusual trade-offs compared to modern architectures. I'm still somewhat amazed (not in a good way) that they can get away with no pixel shader branching or 24-bit depth buffers, but heh...
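For anyone wondering what "predication only" means in practice: the compiler flattens every branch into straight-line code and selects the result, so you pay for both sides. A loose C analogy (the stand-in functions are made up; on the GPU this happens per-pixel with predicate registers, not a ternary):

```c
/* What a predicating compiler does to a pixel-shader branch:
 * evaluate both paths unconditionally, then select. */
static float expensive(float x) { return x * x * x; }  /* stand-in */
static float cheap(float x)     { return -x; }         /* stand-in */

float shade(float x)
{
    /* Source:  if (x > 0.0f) return expensive(x);
     *          else          return cheap(x);
     * Predicated form below: both calls always execute.            */
    float a = expensive(x);       /* cost paid even when not taken  */
    float b = cheap(x);
    return (x > 0.0f) ? a : b;    /* select via the predicate       */
}
```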
 

Yeah, let's hope we get a big improvement in Adreno 320 :smile:
 
The Asus TF300 is coming with DDR3L instead of the LPDDR2 used in other Tegra 3 models.
Assuming this DDR3L is clocked at 1500MHz instead of the LPDDR2's 1066MHz, could we see some substantial improvements in general performance?
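Some back-of-envelope peak numbers, assuming Tegra 3's single-channel 32-bit memory interface and taking the two data rates above at face value (real-world gains depend on latency and controller efficiency, not just the peak):

```c
/* Peak bandwidth estimate for the two memory options above,
 * assuming a single-channel 32-bit interface. */
#include <stdio.h>

int main(void)
{
    const double bus_bytes = 32.0 / 8.0;       /* 32-bit bus */
    const double lpddr2 = 1066e6 * bus_bytes;  /* 1066 MT/s  */
    const double ddr3l  = 1500e6 * bus_bytes;  /* 1500 MT/s  */

    printf("LPDDR2-1066: %.2f GB/s peak\n", lpddr2 / 1e9);
    printf("DDR3L-1500:  %.2f GB/s peak\n", ddr3l / 1e9);
    printf("uplift:      %.0f%%\n", 100.0 * (ddr3l / lpddr2 - 1.0));
    return 0;
}
```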
 

http://www.glbenchmark.com/phonedetails.jsp?D=Asus+Transformer+Pad+TF300TG&benchmark=glpro21

There's probably something wrong there with the Egypt score, but hey, it's still an answer to your question, even if it's likely a joke :devilish:
 


http://www.theverge.com/2012/5/23/3038125/nvidia-reveals-kai-199-quad-core-reference-design
 
Interesting.

I'm looking to buy a cheap-ish Android tablet this summer, so this sounds like good news to me. That said, if $199 translates across to £199 here in the UK, I'll explode in a fit of incandescent fury! I'm still undecided whether to go for a 7" or a 9.7/10.1" tablet, however.

NVidia needs to bring Tegra to the budget sector, as tablets containing the new dual-A9 Rockchip and AML chips are just beginning to be released in China. The Rockchip RK3066, especially, looks to be a pretty good performer with its quad-core Mali GPU.
 
$200 won't translate into £200, but it'll definitely translate into 200€, which is some £160.


I think the Tegra 3 is a very good tablet solution overall, and an excellent choice for a mid/low-end "barebone".

My one and only complaint with the Tegra 2 in my Sony Tablet S is the lack of High Profile H.264 decoding, and consequently poor performance in remote desktop applications (Splashtop, for example).


I don't really care about the extra two cores in Tegra 3, but the fact that it can play any video I have without specific re-encoding, as well as better performance in a remote desktop client, would be the cherry on top for me.

Too bad I won't be parting with my Sony "Bravia" IPS screen anytime soon. I find it way too pretty to look at. :)
 
http://www.anandtech.com/show/5862/nvidia-announces-icera-500-the-baseband-in-grey



Nvidia just announced their upcoming Icera baseband, the Icera 500, which is an LTE UE Category 3 device (building on the Icera 410, which is UE Category 2) and includes HSPA+ and other features from the Icera 450. Because Icera's architecture is a software-defined radio, it was hinted we might even see beyond UE Category 3 in the future. We'll have to wait and see for further details about what other capabilities Icera 500 brings.
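For context, here are the commonly cited 3GPP Release 8 peak rates behind those UE category labels (illustrative peaks only; field throughput is lower):

```c
/* Commonly cited LTE UE category peak rates (3GPP Release 8). */
#include <stdio.h>

int main(void)
{
    struct { int cat; double dl, ul; } ue[] = {
        { 2,  51.0, 25.5 },   /* Icera 410 class                   */
        { 3, 102.0, 51.0 },   /* Icera 500 class                   */
        { 4, 150.8, 51.0 },   /* where an SDR might conceivably go */
    };
    for (int i = 0; i < 3; i++)
        printf("UE Cat %d: %5.1f Mbps DL / %4.1f Mbps UL\n",
               ue[i].cat, ue[i].dl, ue[i].ul);
    return 0;
}
```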
 
Hi, by now most of you will know about the Tegra design win in the new Windows RT Surface tablet. This goes to show just how good their Windows 8 drivers are; it'll probably be the first time we can test the true performance of Tegra 3 without Android UI software bloat.

I'm very interested in this tablet to see what kind of performance Tegra 3 delivers.
 

It should do great at 1366x768. It will be interesting to see how it fares in a market full of higher-resolution tablets using the very same SoC.
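A quick pixel-count comparison shows why the resolution matters so much; 1920x1200 is picked here just as a representative higher-res tablet panel:

```c
/* Per-frame pixel load at the Surface's resolution vs. a
 * representative 1920x1200 tablet panel. Fill rate and pixel
 * shading scale roughly with this count. */
#include <stdio.h>

int main(void)
{
    long surface = 1366L * 768;    /* ~1.05 Mpix */
    long hires   = 1920L * 1200;   /* ~2.30 Mpix */

    printf("pixels per frame: %ld vs %ld (%.2fx)\n",
           surface, hires, (double)hires / surface);
    return 0;
}
```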
 
Wayne isn't that far away :)

Was Wayne, aka Tegra 4, even announced?
I can't remember reading any kind of press release, only rumours that it was shown behind closed doors at MWC '12 to a select few. Besides, wouldn't it be too optimistic to expect Tegra 4 when it hasn't even been officially announced? Unless they announce it along with devices ready for sale, but that would be something completely new for them.
 
This tablet is aimed at a September/October release. I think Wayne would be a bit too optimistic for this launch. Tegra 3+ is my guess.
 
I haven't seen an official release date. We know Windows 8 will be released around September/October. However, the Surface itself is up in the air. It'd frankly be a bad idea for Microsoft to launch their tablet at the same time as their OEM partners.
 