NVIDIA Tegra Architecture

What does Qualcomm promoting FUD have to do with NVIDIA's sales volumes?

It doesn't. As I said though, for any SoC manufacturer, marketing or promotion incentives are IMHO completely redundant. NV started more aggressive marketing when they began gaining some ground in the market, and Qualcomm picked up from there.

Qualcomm was "born" mobile, remember? :D Qualcomm's strength in baseband processors (both 3G and 4G LTE) and focus on mobile computing are largely responsible for their success in the smartphone market today. The realization of a 4G LTE baseband processor from Icera should be a game changer for NVIDIA with respect to getting design wins in the smartphone space, but this baseband processor has only recently started sampling, so it will be some months before any commercial product is ready.

Grey (for mainstream smartphones) sounds like a game changer to me with full LTE integration; for the AP40 I have severe doubts just yet because, amongst other reasons, LTE isn't fully integrated.
 
For a high end smartphone where ultra low cost is of no concern, having a separate 4G LTE baseband processor is not much of a detriment (and in fact may give some added flexibility to the SoC designer). The Snapdragon S4 Pro in fact has a separate 4G LTE baseband processor to go with its quad-core CPU SoC.
 
For a high end smartphone where ultra low cost is of no concern, having a separate 4G LTE baseband processor is not much of a detriment (and in fact may give some added flexibility to the SoC designer). The Snapdragon S4 Pro in fact has a separate 4G LTE baseband processor to go with its quad-core CPU SoC.

High end products are typically low volume, high margin. It's not a coincidence that I consider Grey a better candidate for a real "game changer". Tegra 3 also found itself in high end HTC smartphones, so what?

On an unrelated note, Fudzilla seems confident that the Nexus 7 successor comes with an S4 Pro.
 
What does "4G LTE" mean, actually? There's confusion between LTE and LTE-Advanced, at first only the latter one was considered 4G and bare LTE was sort of 3.9G.. And you have that story about the cell phone that only has partial support of the spectrum (the iphone 5, I think).

If I want a "smart" one, I will only be interested in a cell phone that has LTE-Advanced and supports all legal spectrum bands. Just to have my ass covered: what if LTE-Advanced is deployed in my country on frequency bands different from the US ones? (I have no idea.) Then a spurious "4G LTE" branding would be next to useless.

I'd be interested in low end, by the way (as long as it has at least 1GB of RAM), and I wonder if a combination of an ultra low cost SoC + separate baseband chip is reasonable, e.g. a MIPS SoC with the 4G modem as a separate chip. If the modem, or baseband, or radio, whatever you call it, is too difficult to integrate because of advanced process technology, licenses, and plain electrical/EM difficulties, then it ought to stay separate, and perhaps WiFi and Bluetooth 4.x go there too, I don't know.
 
High end products are typically low volume, high margin. It's not a coincidence that I consider Grey a better candidate for a real "game changer". Tegra 3 also found itself in high end HTC smartphones, so what?

Integrating an Icera 4G LTE baseband processor into a Tegra SoC will naturally be a game changer for NVIDIA, but having a separate Icera 4G LTE baseband processor available is potentially a game changer too, as this will help to kickstart their baseband processor business (where the 4G LTE baseband processors can potentially be used by any vendor such as Apple that is not tied down to Qualcomm or Intel). As for Tegra 3, it was completely shut out of the USA market in the original HTC One X due to lack of an available 4G LTE baseband processor from NVIDIA.

On an unrelated note, Fudzilla seems confident that the Nexus 7 successor comes with an S4 Pro.

In my opinion, it would be logical for Google to release something higher end than the Nexus 7 in order to combat a refreshed iPad mini. Google should be able to introduce a Nexus 7.7 with a higher resolution screen and more powerful CPU/GPU (perhaps using an SoC that is immediately available such as Snapdragon S4 Pro), while still selling the Nexus 7 at a lower price point. This is not necessarily a knock against Tegra 4 per se, just a reflection of the reality that Qualcomm's SoC design cycle is not in step with NVIDIA's SoC design cycle, and a reflection that Google needs to position their tablet products appropriately based on performance and price. After all, wouldn't it be illogical for Google to sell a Nexus 7.7 with a Tegra 4 SoC that would have faster CPU and GPU performance than their own Nexus 10? It would make far more sense to refresh the Nexus 10 series with a higher performance Tegra 4 variant.
 
On a side note, it appears that Tegra 4 (with ULP Geforce GPU) should outperform Snapdragon 800 (with Adreno 330 GPU) and Snapdragon 600 (with Adreno 320 GPU) with respect to graphics performance. The Adreno 330 GPU used in the Snapdragon 800 SoC is said to have up to 50% faster graphics performance than the Adreno 320 used in the S4 Pro SoC according to Qualcomm, whereas the ULP Geforce GPU used in the Tegra 4 SoC is said to have closer to 100% faster graphics performance than the Adreno 320 used in the S4 Pro SoC (based on the claim from NVIDIA that the Tegra 4 GPU is faster than the A6X GPU).
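
To put those vendor claims side by side, here's a quick back-of-the-envelope comparison (purely normalizing the marketing numbers quoted above to Adreno 320 = 1.0; actual benchmark results may land elsewhere):

```python
# Normalizing the vendor claims quoted above to Adreno 320 (S4 Pro) = 1.0.
# These are marketing numbers, not measured results.
adreno_320 = 1.0
adreno_330 = adreno_320 * 1.5    # Qualcomm: "up to 50% faster" than Adreno 320
tegra_4_ulp = adreno_320 * 2.0   # NVIDIA claim read as roughly 2x Adreno 320 (faster than A6X)

print(f"Tegra 4 vs Snapdragon 800 (Adreno 330): ~{tegra_4_ulp / adreno_330:.2f}x")  # ~1.33x
print(f"Tegra 4 vs Snapdragon 600 (Adreno 320): ~{tegra_4_ulp / adreno_320:.2f}x")  # ~2.00x
```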

It also appears that Tegra 4 (with quad-core A15 CPU clocked up to 1.9GHz) should outperform Snapdragon 600 (with quad-core Krait 300 CPU clocked up to 1.9 GHz) with respect to CPU performance. The Krait 300 CPU used in the Snapdragon 600 SoC is said to have up to 40% faster CPU performance than the quad-core Krait CPU [clocked up to 1.5 GHz] used in the S4 Pro SoC according to Qualcomm. The majority of CPU performance improvement in the Snapdragon 600 SoC relative to the S4 Pro SoC is due to an increase in CPU clock operating frequency. The Snapdragon 800 (with quad-core Krait 400 CPU clocked up to 2.3 GHz) should compare much more favorably to quad-core A15's. The Krait 400 CPU used in the Snapdragon 800 SoC is said to have up to 75% faster CPU performance than the quad-core Krait CPU [clocked up to 1.5 GHz] used in the S4 Pro SoC according to Qualcomm. So yet again, the majority of CPU performance improvement in the Snapdragon 800 SoC relative to the S4 Pro SoC is due to an increase in CPU clock operating frequency.
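
And to show how much of Qualcomm's claimed CPU uplift is explained by clock speed alone, a small sanity check (this assumes perfect scaling with frequency, which somewhat overstates the clock contribution):

```python
# How much of the claimed uplift over the S4 Pro (quad-core Krait at 1.5 GHz) is clock alone?
# Assumes perfect scaling with frequency, which overstates the clock contribution a bit.
base_clock = 1.5  # GHz, quad-core Krait in the S4 Pro

for name, clock, claimed in [("Snapdragon 600 (Krait 300)", 1.9, 0.40),
                             ("Snapdragon 800 (Krait 400)", 2.3, 0.75)]:
    clock_gain = clock / base_clock - 1.0                        # gain from frequency alone
    per_clock_gain = (1 + claimed) / (clock / base_clock) - 1.0  # residual per-clock (IPC) gain
    print(f"{name}: ~{clock_gain:.0%} from clock, ~{per_clock_gain:.0%} per-clock, "
          f"out of the claimed {claimed:.0%}")
```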
 
Do the devices also throttle in games, and has Anand revisited the issue since? Has anyone bothered so far to try to find the source of the problem and who or what exactly is to blame? I won't point any fingers without having an answer to the last question, but ironically LG is the Google Nexus 4 manufacturer...

Brian Klug, the writer from Anandtech, is testing the HTC Droid DNA (S4 Pro) right now and is not experiencing the throttling issues present in the Nexus 4. Testing on XDA shows the Nexus 4 throttles at 36 degrees Celsius, which is abnormally low and is due to a very aggressive governor. NordicHardware tested the Asus Padfone 2 (also S4 Pro) and found zero throttling at 36 degrees Celsius, and made no mention of throttling otherwise either.
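
For anyone wanting to check where their own device starts throttling, here's a rough sketch for logging SoC temperatures from the standard Linux thermal sysfs interface while a benchmark runs. It assumes you can run Python on (or adb shell into) the device; zone names, paths and units vary per device, so treat it as a sketch rather than a ready-made tool:

```python
# Log thermal zone temperatures from the standard Linux sysfs interface while a benchmark runs.
# Zone names/paths and units differ per device; adapt as needed (e.g. via "adb shell cat ...").
import glob
import time

def read_temps():
    temps = {}
    for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
        try:
            with open(zone + "/type") as f:
                name = f.read().strip()
            with open(zone + "/temp") as f:
                raw = int(f.read().strip())
            # Many kernels report millidegrees Celsius, some plain degrees; crude heuristic here.
            temps[name] = raw / 1000.0 if raw > 1000 else float(raw)
        except (OSError, ValueError):
            pass
    return temps

for _ in range(60):   # sample for roughly two minutes
    print(read_temps())
    time.sleep(2)
```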
 
The Xiaomi Mi 2 gets pretty hot during Most Wanted gameplay and GLB2.5 runs; I wonder how hot the chip inside gets, probably around 70-80°C.
 
There's a new generation of mobile GPUs ahead to unfold (for NV it might take one or more Tegras depending on their roadmap), with each new process taking, on current estimates, 2-3 years per full node and a new hardware generation arriving every 4-5 years. We're at the verge of 28nm for those and a new GPU generation at the same time, so there's one major step ahead, escalating at a slower pace until the next process node, and then in about 5 years or so we'll have another generation.

No, it's fairly impossible that SFF SoCs will scale beyond 200mm2, and that 400mm2 example sounds exaggerated even for upcoming console SoCs. Let's say those are in the 300-350mm2@28nm league; with two shrinks down the road console manufacturers will be able to go either south of or at 200mm2, and no, those SoCs weren't designed from the get-go for low power envelopes either.

Assume that under 28nm some SoC designers again go as far as 160mm2 and that hypothetically "just" 10mm2 gets dedicated to GPU ALUs. Based on the data I've gathered, you need only 0.01mm2 of synthesis area at 1GHz/28HP for each FP32 unit. Consider that synthesis obviously isn't the entire story, and you neither necessarily need 1GHz frequencies nor would you use HP for a SFF SoC. Under 28HP at 1GHz, however, the theoretical peak is 2 TFLOPs and you can scale down from there. Note that I'm not confident those GPUs will reach or come close to the TFLOP range under 28nm, but I'd be VERY surprised if they don't under 20LP, and no, obviously not at its start.
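
Spelled out, the napkin math above looks like this (synthesis-only figures, so it ignores routing, register files, schedulers and everything else a real GPU needs):

```python
# Napkin math: FP32 units that fit into a hypothetical 10 mm^2 ALU budget at the quoted
# 0.01 mm^2 per unit (28HP synthesis, 1 GHz), counting one FMA as 2 FLOPs per clock.
alu_budget_mm2 = 10.0         # hypothetical ALU slice of a ~160 mm^2 SoC
area_per_fp32_mm2 = 0.01      # quoted synthesis figure at 1 GHz on 28HP
clock_ghz = 1.0
flops_per_unit_per_clock = 2  # one FMA = 2 FLOPs

units = alu_budget_mm2 / area_per_fp32_mm2
peak_tflops = units * clock_ghz * flops_per_unit_per_clock / 1000.0
print(f"{units:.0f} FP32 units -> ~{peak_tflops:.1f} TFLOPs theoretical peak")  # ~2.0 TFLOPs
```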

NV and AMD are slowing things down, but it's a strategic move, since it's better to milk the remaining crop of gaming enthusiasts at high margins than to go for volume. However, GK110 comes within this month and it'll have somewhere north of 4.5 TFLOPs FP32 (which is 3x the GF110 peak value), and the Maxwell top dog in either late 2014 or early 2015 isn't going to scale by less if projections are met, and that's not even with all clusters enabled.

What is its peak power envelope, 8W with roughly half going to the CPU and half to the GPU? Another nice uber-minority example to judge from what exactly? Not only is T604 market penetration for the moment uber-ridiculous, but we'd have to ask why on earth ARM was so eager to integrate FP64 units into the GPU that early. I'll leave it up to anyone's speculation how much it affects die area and power consumption exactly, but in extension to the former synthesis figure of 0.01mm2@28HP for FP32, you need 0.025mm2 for FP64 at the same process and frequency. That's a factor of 2.5x, and while they obviously have only a limited number of FP64 units in the T604, it's not a particular wonder that it's stuck at "just" 72GFLOPs peak theoretical while an upcoming G6400 Rogue would, on estimate, be at over 170GFLOPs at the same frequency.

***edit: note that I have not a single clue how ARM integrated FP64, but whether they used FP32 units with loops or dedicated FP64 units, it's going to affect die area either way.
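
To illustrate the area argument with the same synthesis figures, assuming dedicated FP64 units at the quoted 0.025mm2 (which, per the edit above, may or may not be how ARM actually implemented it):

```python
# Area cost of FP64 vs FP32 using the quoted synthesis figures (1 GHz / 28HP).
# Assumes dedicated FP64 units; purely illustrative of the die-area trade-off.
area_fp32_mm2 = 0.01
area_fp64_mm2 = 0.025

per_mm2_fp32 = 1.0 / area_fp32_mm2   # 100 FP32 units per mm^2
per_mm2_fp64 = 1.0 / area_fp64_mm2   # 40 FP64 units per mm^2
print(f"FP32 units per mm^2: {per_mm2_fp32:.0f}, FP64 units per mm^2: {per_mm2_fp64:.0f}")
print(f"Each FP64 unit costs {area_fp64_mm2 / area_fp32_mm2:.1f}x the area of an FP32 unit")
```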

See above. It yields around 4100 frames in GL2.5 while the 554MP4@280MHz is at ~5900. Still wondering why Samsung picked a 544MP3 at high frequencies for the octacore? Even more ridiculous, the latter will be somewhat faster than the Nexus 10 GPU. An even dumber question from my side would be why Samsung didn't choose a Mali 4xx MP8 instead for the octacore; both die area and power consumption would have been quite attractive.

See the first paragraph above; as a layman I have the luxury of not minding ridiculing myself. I'm merely waiting to be corrected on my chain of thought. Mark that I NEVER supported the original poster's 5-year notion for Tegras. I said more than once that if you stretch that timespan it's NOT impossible.

Yeah, the Mali MP8 is what my original point was, Ailuros.
 
Do you really think that Snapdragon S4 Pro (with quad-core Krait CPU and Adreno 320 GPU) is a "very very good" chip for smartphones? Here is what Anandtech had to say about the S4 Pro (http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review/2): "The Nexus 4 was really hot by the end of our GLBenchmark run, which does point to some thermal throttling going on here. I do wonder if the Snapdragon S4 Pro is a bit too much for a smartphone, and is better suited for a tablet at 28nm".

Any performance projections that Qualcomm (or any other company including Apple, Samsung, NVIDIA, etc.) makes should be taken with a grain of salt. Note that Qualcomm's CMO Anand Chandrasekher recently claimed that Tegra 4 (with quad-core A15 CPU and non-unified 72 "core" GPU) looks a lot like the S4 Pro (!?!). Qualcomm's CEO Paul Jacobs recently claimed that Exynos 5 Octa is a publicity stunt. Qualcomm's Snapdragon VP Raj Talluri recently claimed that there is no difference between Project Shield and a Moga controller. Qualcomm is spreading FUD on their competitor's future products that have not even been released yet.

I happen to think it's an amazing chip for smartphones actually... the updated version, Snapdragon 600, will be awesome... manufacturers wouldn't be flocking to the SoC if it wasn't good, ams... they are not all stupid.

Adreno 320 is the best smartphone GPU of its generation... PowerVR can take several generations before that :) (and maybe after!)
 
Hmm, not sure how you want me to be more specific.

Sorry, but you're wrong; current mobile GPUs don't have "room" to spare when running _demanding_ content. Just look at GLBench 2.5: I would hardly call it demanding by today's standards, yet current mobile GPUs are still struggling with it. There is still huge scope for gains from increased performance.



What's hidden about it? The S4 struggles on GLBench 2.5; Halti features are irrelevant to the basic shortage of horsepower.



Which _demanding_ games would you be talking about exactly? Titles are written for the lowest common denominator, which at the current time comes down to performance. Uplift performance across the board and content will become more demanding and IQ will improve. Adding Halti features doesn't aid this.


You're kidding me, right? You're the one who was saying how disappointed you are with a vendor's possible choice of GPU in their next SoC, based on what seems to be a combination of Halti and a bunch of ARM marketing bullshit.



Hmm, I'm pretty sure I'm not arguing against higher tech or claiming that ES 3.0 won't offer benefits; perhaps you missed the comment about supporting useful ES 3.0 features via extensions (something NV and now IMG are both doing). I'm primarily pointing out that you need a balanced approach: throwing features at the platform alone will not increase IQ, you need performance to go with them. I'm pretty certain someone has already answered why it isn't a good idea to implement Halti games right now, i.e. why limit your market to a few select high end devices, particularly where more IQ gains can be had just by exploiting greater performance.

The Adreno 320 doesn't struggle with GLBenchmark, John... see above.

I'm not buying into any ARM marketing "bullshit"... I'm merely stating that all current games can run smoothly on an Adreno 205/SGX 540 class of hardware... we are well beyond that performance now... like I said, if all games can run on smartphones with those GPUs... why couldn't there be in-game options to enable more features and IQ?? Of course there can be.

On your point about the SGX 5 series enabling extensions that take it near parity with Halti... you're right... I've just read an article that also states this... so perhaps the 5 series is still a good option.
 
Tegra 4 on a Dalmore board... GPU on par with Adreno 320... with twice the bandwidth and higher powered CPUs...


A quick examination of the low level GLBenchmark 2.5 results should tell you that the Dalmore result is not realistic at all, and that the GPU clock operating frequency is artificially low. This is what I posted earlier in the thread:

Comparing Dalmore to the Asus Transformer Pad TF700T Infinity median result, I see the following differences:

Fill rate - Offscreen (1080p) : 1.20x
Triangle throughput: Textured - Offscreen (1080p) : 1.00x
Triangle throughput: Textured, vertex lit - Offscreen (1080p) : 1.59x
Triangle throughput: Textured, fragment lit - Offscreen (1080p) : 1.00x

The differences in bandwidth and CPU performance may explain why Dalmore is much faster in the GLBenchmark 2.5 HD Offscreen test than the highest performance Tegra 3 variant, but they do not explain the minimal differences in low level results between Dalmore and the highest performance Tegra 3 variant.
 
What exactly makes the Adreno 320 a better smartphone GPU than the iPhone 5 GPU?

It's more advanced API-wise (slightly)... I'm sure it's higher performance? (slightly) and it's still on early release drivers...

Although the iPhone 5 can be considered a real rival for the top spot... in Android devices nothing comes close :)
 
A quick examination of the low level GLBenchmark 2.5 results should tell you that the Dalmore result is not realistic at all, and that the GPU clock operating frequency is artificially low. This is what I posted earlier in the thread:

Comparing Dalmore to the Asus Transformer Pad TF700T Infinity median result, I see the following differences:

Fill rate - Offscreen (1080p) : 1.20x
Triangle throughput: Textured - Offscreen (1080p) : 1.00x
Triangle throughput: Textured, vertex lit - Offscreen (1080p) : 1.59x
Triangle throughput: Textured, fragment lit - Offscreen (1080p) : 1.00x

The differences in bandwidth and CPU performance may explain why Dalmore is much faster in the GLBenchmark 2.5 HD Offscreen test than the highest performance Tegra 3 variant, but they do not explain the minimal differences in low level results between Dalmore and the highest performance Tegra 3 variant.

Yes, well pointed out ;) I remember reading that post now, come to think of it.

We await final clocks and drivers before jumping to conclusions... wonder how long that is going to take with only Toshiba publicly signing up?
 
Fudo is claiming Nvidia has run into problems with Tegra 4 and it might be delayed until Q4. It might explain their lack of design wins.

http://www.fudzilla.com/home/item/3...omm?utm_source=twitterfeed&utm_medium=twitter

Fudo didn't say that Tegra 4 is delayed until Q4 2013 :) He said that Tegra 4 silicon from ~ late Q3 2012 needed a respin, which is why Tegra 4 only recently started sampling. And since Tegra 4 only recently started sampling, it is still too early to announce design wins. NVIDIA claimed at CES 2013 that Tegra 4 already has more design wins than Tegra 3.
 
Fudo didn't say that Tegra 4 is delayed until Q4 2013 :) He said that Tegra 4 silicon from late Q3 to early Q4 2012 needed a respin, which is why Tegra 4 only recently started sampling.

You are right. I was in a hurry reading that, apologies for confusion :oops:
 