NVIDIA Tegra Architecture

Couple that with new smartphone-friendly features such as the super-fast HDR camera (where Tegra 4 is reportedly ~10x faster than the iPhone 5), and Tegra 4 looks very nice for use in a variety of different handheld devices. Even time to market is not too bad, considering that Tegra 4-equipped devices will appear on the market just a few months after the latest and greatest iPhone and iPad.

The Sony Xperia Z does HDR video capture right now with the "already released" Snapdragon S4 Pro powering it... always take NVIDIA's claims with a pinch of salt.
 

From what I can tell, it's not the HDR feature itself that is special, it's the performance of HDR on Tegra 4. Again, take it with a pinch of salt.
 
The roadmap also puts Tegra 3 above said Core 2 Duo, so…

A while ago, NVIDIA showed a slide where Tegra 3 was slightly above Core 2 Duo (in Coremark), so it is not surprising that NVIDIA would put Tegra 3 slightly above Core 2 Duo in this roadmap. The only anomaly I see so far is Tegra 2. It appears that Tegra 2 should be closer to 2x rather than 1x (assuming that this roadmap is a measure of CPU performance differences from one generation to another).
 
Did they say what configuration/frequency of Core 2 Duo they are comparing to, since there's a huge performance range? I'm guessing it's something closer to the 1.07GHz/2MB/533MHz FSB U7500 first-gen ULV Core 2 Duo than the final 3.33GHz/6MB/1333MHz FSB E8600 desktop Wolfdale.
 

You are aware that nVidia's benchmarking was highly skewed, right? Coremark was compiled with two very different versions of GCC, where the x86 version was something like 50% slower than it would have been had they used the same compiler. They were pretty quickly called on it. That comparison with that Core 2 Duo model (a low-clocked one; it's even more ridiculous to claim it's against the entire CPU line) was completely invalid.
 
It's a matter of perspective. NV can't afford bigger than 80mm2 SoCs while Apple can, with the volumes it's dealing with.

If there is enough demand from NVIDIA's partners for larger and more expensive SoCs with even higher performance, then NVIDIA will certainly do it. But in the meantime, NVIDIA appears to have done a nice job given the die size constraints in Tegra 4.

As for the respective claims, we will find out soon, with real-world measurements, whether and to what degree all of it is true, and we'll also see in due time what other upcoming 28nm SoCs are capable of in more apples-to-apples comparisons.

It is pretty clear that Tegra 4 should outperform A6X in the GLBenchmark 2.5 HD Offscreen 1080p test. After all, even with an artificially low GPU clock operating frequency where texture fillrate and textured triangle throughput are merely equalized with Tegra 3 (which is completely unrealistic), Tegra 4 is already more than 3x faster than Tegra 3 in the GLBenchmark 2.5 HD Offscreen test. Considering that Tegra 4 will be operating at GPU clock frequencies that are higher than Tegra 3's, it is almost a foregone conclusion that Tegra 4 will supplant A6X at the top of the GLBenchmark chart.

They managed for the first time, after six generations of mobile parts, to have the fastest SoC in terms of GPU performance, and that only for a limited amount of time.

This statement is a bit disingenuous. The reality is that, since the iPad came out in 2010, NVIDIA has had only three generations of Tegra devices: Tegra 2, Tegra 3, and soon Tegra 4. So let's give credit where credit is due :) As for the comment about having the fastest SoC for a limited amount of time, you do realize that mobile SoCs get refreshed/updated roughly every 12 months (or less)? By definition, the world's fastest SoC will always be around for a limited amount of time, period.

Are you willing to bet that frequencies will be exactly the same between T40 and AP40?

Obviously the GPU (and/or CPU) clock operating frequency will be different between AP40 and T40. NVIDIA has already admitted that operating frequencies will vary to some extent, just as was the case with Tegra 3. That said, looking at Tegra 3, the GPU clock operating frequency of higher performance variants was never more than 25% higher than lower performance variants.

2x faster? :LOL:

Sure, why not? After all, the GPU performance of the A6X is up to 2x that of the A6. NVIDIA has already claimed that the Tegra 4 GPU is faster than A6X (even in GLBenchmark 2.5). If we assume that the top Tegra 4 variant has a GPU that is 10% faster than A6X, and if we assume that AP40 will have a GPU that is 25% slower than the top Tegra 4 variant, then the GPU in AP40 would be up to 1.7x faster than the GPU in the iPhone 5.
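
To make the chain of assumptions explicit, here it is as a quick sketch (every multiplier below is a speculative assumption from the paragraph above, not a measured figure):

```python
# Speculative GPU-performance chain; all multipliers are assumptions, not measurements.
a6x_vs_a6 = 2.0    # assumed: A6X GPU up to 2x the A6 (iPhone 5)
t4_vs_a6x = 1.10   # assumed: top Tegra 4 variant 10% faster than the A6X
ap40_vs_t4 = 0.75  # assumed: AP40 (smartphone variant) 25% slower than the top Tegra 4

ap40_vs_iphone5 = a6x_vs_a6 * t4_vs_a6x * ap40_vs_t4
print(f"AP40 vs iPhone 5 GPU: ~{ap40_vs_iphone5:.2f}x")  # ~1.65x, i.e. "up to ~1.7x"
```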

OK... In any case, as above, I'd rather compare AP40 when it appears in final devices vs. the iPhone 6 or anything else other competitors will release in the meantime.

This doesn't make sense to me. The iPhone 5 became widely available only ~3 months ago. If a smartphone using AP40 is available within the next three months, it will clearly be a direct competitor to the iPhone 5. As for the iPhone 6, one may as well compare it to AP50 (i.e. Logan / Project Denver) too.

Just in case you've missed it, there are differences in power envelopes between a tablet and a smartphone SoC.

No, I didn't miss that. But looking at Tegra 3 as a guide, it is very clear that both CPU and GPU clock operating frequencies varied by no more than ~ 25% for different SoC variants. With Tegra 4, SoC die size is no bigger than Tegra 3 (although obviously transistor density is up), and NVIDIA claims that average power consumption is 45% lower than Tegra 3.

Ultra yawn for the HDR camera stuff. Let's see final devices appear on shelves first; I'll reserve any judgement in comparison to anything else until then.

Ultra yawn? The ultra-fast HDR camera photography is what it is, simply a much faster way to take photos with HDR.

And just in case you haven't also noticed, the market doesn't spin around Apple, not by far.

The market doesn't spin around Apple, but Apple is certainly the benchmark when it comes to mobile GPU performance.

NVIDIA can for the moment only dream of matching the smartphone design wins and sales volumes of either Qualcomm or Samsung.

This statement has no relevance as to the merits of the Tegra 4 SoC. That said, NVIDIA did sell many millions of Tegra 3 SoCs. There is also a slide on NVIDIA's website stating that Tegra 4 already has more design wins than Tegra 3. Clearly NVIDIA has much more room to grow their mobile business than Qualcomm, Samsung, etc.
 
You are aware that nVidia's benchmarking was highly skewed, right? Coremark was compiled with two very different versions of GCC, where the x86 version was something like 50% slower than it would have been had they used the same compiler. They were pretty quickly called on it. That comparison with that Core 2 Duo model (a low-clocked one; it's even more ridiculous to claim it's against the entire CPU line) was completely invalid.

Yes, I did notice that. I am not claiming that Kal-El is truly above Core 2 Duo; all I am saying is that NVIDIA did actually show a slide placing Kal-El above a Core 2 Duo [T7200] in Coremark. Given that fact, it is no surprise that they would place Kal-El above Core 2 Duo [T7200?] in the roadmap too. Irrespective of how Kal-El truly compares to any Core 2 Duo variant, the fact that NVIDIA noted a CPU on the roadmap indicates that the roadmap shows only CPU performance differences from one generation to another (although Tegra 2's standing in the roadmap as a 1x baseline measure appears to be off).
 
I might be missing context here, but isn't there an underlying point that nVidia's marketing is highly dishonest and therefore isn't really even worth regarding? Given this, I'd say it doesn't really matter whether their roadmaps are only talking about CPU performance.

Their average power consumption claims are pretty meaningless too.
 
Sure, point taken, and the marketing and promotional material (from all companies) truly does need to be taken with a grain of salt.
 
If there is enough demand from NVIDIA's partners for larger and more expensive SoCs with even higher performance, then NVIDIA will certainly do it. But in the meantime, NVIDIA appears to have done a nice job given the die size constraints in Tegra 4.

I still fail to see what any of it has in common with Apple. Apple, while designing its own SoCs, doesn't sell or license any of them to third parties, and the point still stands that with the volumes Apple is dealing in, die area is far less of an issue than when you're dealing with many times lower volumes.


It is pretty clear that Tegra 4 should outperform A6X in the GLBenchmark 2.5 HD Offscreen 1080p test. After all, even with an artificially low GPU clock operating frequency where texture fillrate and textured triangle throughput are merely equalized with Tegra 3 (which is completely unrealistic), Tegra 4 is already more than 3x faster than Tegra 3 in the GLBenchmark 2.5 HD Offscreen test. Considering that Tegra 4 will be operating at GPU clock frequencies that are higher than Tegra 3's, it is almost a foregone conclusion that Tegra 4 will supplant A6X at the top of the GLBenchmark chart.

At 4x the highest T30 score I get 5964 frames, which is a notch above the iPad 4's current 5868 frames. Where you see any indication that T4 will operate at higher frequencies is beyond me. It may very well be the case, but so far there's no indication of it, since the Dalmore results could easily have been with, say, a 300MHz frequency.
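
The back-of-envelope math behind that, for reference (the 4x multiplier is NVIDIA's claimed upper bound; the frame totals are the GLBenchmark 2.5 Egypt HD Offscreen scores cited in this thread):

```python
# GLBenchmark 2.5 Egypt HD Offscreen (1080p) frame totals cited in the thread.
t30_best = 1491          # highest Tegra 3 score in the Kishonti database
ipad4 = 5868             # iPad 4 (A6X) score at the time

t4_at_4x = 4 * t30_best  # NVIDIA's claimed 4x upper bound over Tegra 3
print(t4_at_4x, t4_at_4x > ipad4)  # 5964 True -- a notch above the iPad 4
```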

This statement is a bit disingenuous. The reality is that, since the iPad came out in 2010, NVIDIA has had only three generations of Tegra devices: Tegra 2, Tegra 3, and soon Tegra 4. So let's give credit where credit is due :)
And your skewed POV still revolves around Apple and Apple alone, while in reality they compete with NVIDIA only on an indirect level. Apple has its own OS, its own application store, designs its own SoCs, and designs and manufactures its final devices sold under its own brand name.

If you want to count, NV's presence in the SFF mobile market starts with the GoForce 1, with a succeeding refresh after that, and four Tegra SoCs so far.

As for the comment about having the fastest SoC for a limited amount of time, you do realize that mobile SoCs get refreshed/updated roughly every 12 months (or less)? By definition, the world's fastest SoC will always be around for a limited amount of time, period.

Exactly why it's never really a surprise if succeeding hardware is better than the preceding. In the given case it's been a couple of months since the iPad 4 release, but if you know when Apple intends to release its follow-up, I'm all ears, because I frankly don't know.


Obviously the GPU (and/or CPU) clock operating frequency will be different between AP40 and T40. NVIDIA has already admitted that operating frequencies will vary to some extent, just as was the case with Tegra 3. That said, looking at Tegra 3, the GPU clock operating frequency of higher performance variants was never more than 25% higher than lower performance variants.

20% for T3 (T30 = 520MHz, AP30 = 416MHz). A 20% lower frequency indicates roughly 20% lower performance, all other variables equal. With speculative math (based on NV's own claim that T4 is from 3x to 4x faster in benchmarks and games) the former speculative GL2.5 score is 5964 - 20% = 4771. Compared to the iPhone 5 that's by far not 2x, rather 1.45x, and that's not even the point here, since Qualcomm's Adreno 320, for instance, is not only faster but will soon receive a refresh in the Adreno 330.
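
Spelled out as a sketch (purely speculative: the 20% clock delta is extrapolated from Tegra 3's T30/AP30 split, and the frame totals are those cited in this thread):

```python
# Counter-estimate: an AP40 clocked ~20% below the top Tegra 4 variant,
# mirroring the T30 (520MHz) vs AP30 (416MHz) split on Tegra 3.
t4_speculative = 5964  # 4x the best Tegra 3 GL2.5 Offscreen score (speculative)
iphone5 = 3290         # iPhone 5 (A6) GL2.5 Offscreen frame total

ap40_speculative = t4_speculative * 0.80
print(round(ap40_speculative))               # ~4771 frames
print(round(ap40_speculative / iphone5, 2))  # ~1.45x the iPhone 5, not 2x
```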


Sure, why not? After all, the GPU performance of the A6X is up to 2x that of the A6.

5868 vs. 3290 at the moment? That's 1.78x actually, and probably a best-case scenario with GL2.5, since there aren't any games as demanding as it available yet. Else see above.
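
A quick check on those figures (the frame totals cited in this thread):

```python
# A6X (iPad 4) vs A6 (iPhone 5), GL2.5 Egypt HD Offscreen frame totals.
a6x, a6 = 5868, 3290
print(round(a6x / a6, 2))  # ~1.78x -- short of the claimed "up to 2x"
```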

NVIDIA has already claimed that the Tegra 4 GPU is faster than A6X (even in GLBenchmark 2.5). If we assume that the top Tegra 4 variant has a GPU that is 10% faster than A6X, and if we assume that AP40 will have a GPU that is 25% slower than the top Tegra 4 variant, then the GPU in AP40 would be up to 1.7x faster than the GPU in the iPhone 5.

See even further above; and since that "the mobile market consists only of Apple" disease is spreading, now tell me when Apple is going to release their next-generation smartphone and with what performance characteristics. I for one don't know. Since no AP40 smartphones are purchasable at this point, I'll pass on such far-fetched assumptions until we have the first reliable piece of information.

This doesn't make sense to me. The iPhone 5 became widely available only ~3 months ago. If a smartphone using AP40 is available within the next three months, it will clearly be a direct competitor to the iPhone 5. As for the iPhone 6, one may as well compare it to AP50 (i.e. Logan / Project Denver) too.

Again, do you know Apple's roadmap while no other "mortal" here knows? It's still an assumption, and Logan device availability might arrive roughly a year after Wayne device availability, given NV's yearly SoC cadence.

No, I didn't miss that. But looking at Tegra 3 as a guide, it is very clear that both CPU and GPU clock operating frequencies varied by no more than ~ 25% for different SoC variants. With Tegra 4, SoC die size is no bigger than Tegra 3 (although obviously transistor density is up), and NVIDIA claims that average power consumption is 45% lower than Tegra 3.

Wouldn't it be WISER to wait for real-world measurements when T4 devices ship and get analyzed by independent third parties? I'm not saying that the power consumption claim isn't for real, but I don't dare speculate under which boring scenario it even holds up to be true.

Ultra yawn? The ultra-fast HDR camera photography is what it is, simply a much faster way to take photos with HDR.

I am allowed to have an opinion, if you don't mind, and to not be excited about stuff like that, am I not?

The market doesn't spin around Apple, but Apple is certainly the benchmark when it comes to mobile GPU performance.

Apple is also, coincidentally, the kind of company that could tolerate a <1 year product cadence. Yes or no? Whether it's reasonable or not is another chapter. Of course the highest GPU performance creates a halo effect, but if it were the defining factor for anything, it would underestimate by a lot all the other effort Apple puts into its products (software environment included), and it wouldn't explain why Qualcomm continuously cleans house with its smartphone SoC design wins, even up to the Adreno 3xx.

This statement has no relevance as to the merits of the Tegra 4 SoC. That said, NVIDIA did sell many millions of Tegra 3 SoCs. There is also a slide on NVIDIA's website stating that Tegra 4 already has more design wins than Tegra 3. Clearly NVIDIA has much more room to grow their mobile business than Qualcomm, Samsung, etc.

In spring 2012 their mobile GPU share in the small form factor market was roughly 1/10th that of Qualcomm (3.2% vs. 33%), and yes, of course it has no relevance from your POV. At best it's going to climb to what this year? Twice as much? I'd even be much more generous if you want; it's still peanuts compared to Qualcomm and Samsung. If we went as far as counting GPU market share by GPU architecture, the results would be even more disheartening.

Sure it'll increase, and even more so as the years go by, but for the moment they're still a mouse chasing after elephants. But there's also a reason why elephants are so scared of mice.
 
I still fail to see what any of it has in common with Apple.

Tegra 4 and A6X are indirect competitors to each other. Like it or not, these SoCs will continue to be compared and contrasted with each other. The fact that the Tegra 4 SoC appears to have better CPU and GPU performance than the A6X, an SoC that is roughly 50% larger in die area, is a notable achievement, irrespective of whether or not Apple can easily afford to use large SoC die sizes.

Where you see any indication that T4 will operate at higher frequencies is beyond me.

From Anandtech (http://www.anandtech.com/show/6666/the-tegra-4-gpu-nvidia-claims-better-performance-than-ipad-4): "Tegra 4 features six Vec4 vertex units (FP32, 24 cores) and four 3-deep Vec4 pixel units (FP20, 48 cores). The result is 6x the number of ALUs as Tegra 3, all running at a max clock speed that's higher than the 520MHz NVIDIA ran the T3 GPU at".
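
The ALU arithmetic in that quote works out as follows (unit counts are from the Anandtech description; the Tegra 3 baseline of 12 cores is the commonly cited 4 vertex + 8 pixel configuration):

```python
# Tegra 4 GPU "core" count per the Anandtech description.
vertex_alus = 6 * 4     # six Vec4 vertex units (FP32) -> 24 cores
pixel_alus = 4 * 3 * 4  # four 3-deep Vec4 pixel units (FP20) -> 48 cores
tegra4_total = vertex_alus + pixel_alus

tegra3_total = 12       # Tegra 3: commonly cited as 4 vertex + 8 pixel cores
print(tegra4_total, tegra4_total / tegra3_total)  # 72 6.0 -- matches "6x the ALUs"
```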


With speculative math (based on NV's own claim that T4 is from 3x to 4x faster in benchmarks and games) the former speculative GL2.5 score is 5964 - 20% = 4771. Compared to the iPhone 5 that's by far not 2x, rather 1.45x

It is not logical that Tegra 4 would be limited to being 3-4x faster than Tegra 3 in all cases. First of all, if that were true, then Tegra 4 would not be able to outperform A6X in the GLBenchmark 2.5 HD Offscreen test. Second of all, Tegra 4 already outperforms Tegra 3 by more than 3x in the GLBenchmark 2.5 HD Offscreen test with crippled clock operating frequencies. Still, regardless of where the final clock operating frequencies end up, I don't think it is a stretch of the imagination to conclude that AP40 will handily outperform A6 with respect to GPU performance. Of course, we will have to wait for actual benchmark results to speak with any precision, but the writing is on the wall, so to speak.

In spring 2012 their mobile GPU share in the small form factor market was roughly 1/10th that of Qualcomm (3.2% vs. 33%), and yes, of course it has no relevance from your POV. At best it's going to climb to what this year? Twice as much?

Again, this has no relevance to a discussion about Tegra's roadmap or Tegra's architecture relative to other architectures. You are telling me that Qualcomm and Samsung are much bigger companies than NVIDIA, and that they have more mobile market share too. Tell me something I don't know :D That in no way diminishes what NVIDIA was able to accomplish with Tegra 4.
 
This doesn't make sense to me. The iPhone 5 became widely available only ~3 months ago. If a smartphone using AP40 is available within the next three months, it will clearly be a direct competitor to the iPhone 5. As for the iPhone 6, one may as well compare it to AP50 (i.e. Logan / Project Denver) too.
Tegra 4 reportedly had to undergo a re-spin, which is why there haven't been a bunch of design announcements yet. Devices should be announced later in Q1 for shipment in Q2. Being a chip seller, nVidia can make prognostications about being the fastest mobile SoC, but the conditions on the ground could change by the time it actually ships. Apple has the luxury of announcing and shipping within a week or two.

In the case of the next iPhone, analysts seem to pretty much have a consensus (as much as a bunch of people guessing the same thing is insightful) that the iPhone 5S has been pulled forward to a mid-year release, probably in June at WWDC. There have already been developer logs and supposed parts leaks, so this seems credible. If both Tegra 4 smartphones and the iPhone 5S ship on that schedule, early Q2 and late Q2 respectively, they're pretty much direct competitors.

"S" revisions have come with pretty large performance jumps, there's the Adreno 330 as Ailuros mentions, and there's the possibility of a March iPad refresh (although there's been less evidence of that) which could pre-empt Tegra 4 availability on the tablet side. Q2 looks to be a very active quarter, so I think we'll have to see what makes it into customers' hands, and when, before giving out the performance crown.

This was the chip they were comparing against:
http://ark.intel.com/products/27255/Intel-Core2-Duo-Processor-T7200-4M-Cache-2_00-GHz-667-MHz-FSB

As noted by Exophase, there were also compiler issues at work.

I wouldn't be too surprised if Cortex-A15 is faster clock for clock than Core 2, though.
People have argued that a Core 2 Duo is sufficient for most people's general computer needs, so it is notable that mobile chips are basically there in a more portable/convenient form factor.
 
Q2 looks to be a very active quarter, so I think we'll have to see what makes it into customers' hands, and when, before giving out the performance crown.

This is true. At the end of the day, rather than placing emphasis on achieving the performance crown at all costs, the important thing for SoC designs such as Tegra is to have a good balance of performance and power consumption at a good price.
 
Maybe I'm jumping in halfway through with something that's already been discussed, but I just read an article about Tegra 4 getting 32fps in GLBenchmark 2.5, offscreen. That's decent but not great, considering Adreno 320 gets around that already.

Edit: ah, just read all the posts above. Looks like Tegra 4 will improve its performance via clocks or something.
Still, this has got to make Qualcomm smile. They may just have the most powerful/advanced (API-wise) smartphone GPU till the next iPhone 5S, depending of course on the Galaxy S4's processor.
 

Take a look at the low level benchmarks of the Dalmore Tegra 4 test system here: http://webcache.googleusercontent.com/search?q=cache:BHeFcdcCZt0J:www.glbenchmark.com/phonedetails.jsp%3Fbenchmark%3Dglpro25%26D%3DDalmore%2BDalmore%26testgroup%3Dlowlevel+http://www.glbenchmark.com/phonedetails.jsp%3Fbenchmark%3Dglpro25%26D%3DDalmore%2BDalmore%26testgroup%3Dlowlevel&cd=1&hl=en&ct=clnk&gl=uk . Note that the Dalmore Tegra 4 test system has texture fillrate and textured triangle throughput roughly equalized with Tegra 3. Think about the significance of that for a second. With texture fillrate and textured triangle throughput equalized with Tegra 3 (which is completely unrealistic and implies severely crippled clock operating frequencies on Tegra 4), Tegra 4 is able to achieve a greater than 3x performance improvement over Tegra 3 in the high level GLBenchmark 2.5 Egypt HD Offscreen 1080p test!

FYI, the GLBenchmark 2.5 Egypt HD Offscreen 1080p results for Adreno 320 are slower than even the crippled Dalmore Tegra 4 test system. The top GLBenchmark results for Adreno 320 are highly misleading too. According to Anandtech, the LG Optimus G (with Adreno 320 GPU) "can't complete a single, continuous run of GLBenchmark 2.5 - the app will run out of texture memory and crash if you try to run through the entire suite in a single setting. The outcome is that the Optimus G avoids some otherwise nasty throttling". When using an Adreno 320 equipped smartphone that is actually able to achieve a continuous run of GLBenchmark 2.5 by throttling (such as the Google Nexus 4 phone), the GLBenchmark 2.5 Egypt HD Offscreen 1080p results for Adreno 320 are only 18 fps! See the data here: http://images.anandtech.com/graphs/graph6425/51288.png
 
Tegra 4 and A6X are indirect competitors to each other. Like it or not, these SoCs will continue to be compared and contrasted with each other.

Like it or not, die area is mostly a matter of cost for SoC manufacturers, and I repeatedly explained why it's a non-issue for Apple. Equally, like it or not, the end user couldn't care less whether an SoC is 80, 120 or more mm2.

The fact that the Tegra 4 SoC appears to have better CPU and GPU performance than the A6X, an SoC that is roughly 50% larger in die area, is a notable achievement, irrespective of whether or not Apple can easily afford to use large SoC die sizes.
How much better is rather the key question, under which variables, and for how long.

From Anandtech (http://www.anandtech.com/show/6666/the-tegra-4-gpu-nvidia-claims-better-performance-than-ipad-4): "Tegra 4 features six Vec4 vertex units (FP32, 24 cores) and four 3-deep Vec4 pixel units (FP20, 48 cores). The result is 6x the number of ALUs as Tegra 3, all running at a max clock speed that's higher than the 520MHz NVIDIA ran the T3 GPU at".
Where is there an indication of Dalmore's frequency in that, again? How does it defeat the guess that it might have been running at 300MHz?

It is not logical that Tegra 4 would be limited to being 3-4x faster than Tegra 3 in all cases. First of all, if that were true, then Tegra 4 would not be able to outperform A6X in the GLBenchmark 2.5 HD Offscreen test.
I took NV's own statement. At 4x it is faster; NV nowhere indicated how much faster exactly, just that it's faster. Of course, the more the merrier for everyone, but I typically keep my expectations low in order to avoid disappointments. You may wish for 10k frames, but I don't think it's going to happen anytime soon.

Second of all, Tegra 4 already outperforms Tegra 3 by more than 3x in the GLBenchmark 2.5 HD Offscreen test with crippled clock operating frequencies.
3600+ divided by 1491 gives me a far more humble difference, but I can understand why you want to exaggerate in every other sentence as long as it concerns NV.
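
That humbler ratio, spelled out (the "3600+" figure is the approximate Dalmore total cited above):

```python
# Dalmore Tegra 4 vs best Tegra 3, GL2.5 Egypt HD Offscreen frame totals.
dalmore_t4 = 3600  # "3600+" frames, approximate
t30_best = 1491    # fastest Tegra 3 result in the Kishonti database

print(round(dalmore_t4 / t30_best, 2))  # ~2.41x -- humbler than "more than 3x"
```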

Still, regardless of where the final clock operating frequencies end up, I don't think it is a stretch of the imagination to conclude that AP40 will handily outperform A6 with respect to GPU performance. Of course, we will have to wait for actual benchmark results to speak with any precision, but the writing is on the wall, so to speak.
No disagreement that it'll be faster. Again, I'm keeping my expectations low to avoid disappointments. Besides, would you rather have something like >=4.5k frames and an absolutely stable device under every condition, or something that is subject to constant thermal throttling when the going gets tough? It doesn't have to be that way, but again, performance is not everything.


Again, this has no relevance to a discussion about Tegra's roadmap or Tegra's architecture relative to other architectures. You are telling me that Qualcomm and Samsung are much bigger companies than NVIDIA, and that they have more mobile market share too. Tell me something I don't know :D That in no way diminishes what NVIDIA was able to accomplish with Tegra 4.
Most of us here know the sizes of each company. The point here actually is that Qualcomm will still sell significantly more smartphone SoCs even if their GPUs are slower than T4's. In the hope that you might finally understand what I'm pointing at: yes, T4 is a far better step in the right direction compared to former NV SoCs; however, it's NOT going to take the SFF mobile market by storm. The mouse just became a rat, but that's about it.

Finally, it's highly entertaining how GL2.5 all of a sudden became highly important, while just a couple of months ago it was almost regarded as irrelevant by some. Boy, how easily opinions and facts change in the blink of an eye :LOL:
 
Take a look at the low level benchmarks of the Dalmore Tegra 4 test system here: http://webcache.googleusercontent.com/search?q=cache:BHeFcdcCZt0J:www.glbenchmark.com/phonedetails.jsp%3Fbenchmark%3Dglpro25%26D%3DDalmore%2BDalmore%26testgroup%3Dlowlevel+http://www.glbenchmark.com/phonedetails.jsp%3Fbenchmark%3Dglpro25%26D%3DDalmore%2BDalmore%26testgroup%3Dlowlevel&cd=1&hl=en&ct=clnk&gl=uk . Note that the Dalmore Tegra 4 test system has texture fillrate and textured triangle throughput roughly equalized with Tegra 3. Think about the significance of that for a second. With texture fillrate and textured triangle throughput equalized with Tegra 3 (which is completely unrealistic and implies severely crippled clock operating frequencies on Tegra 4), Tegra 4 is able to achieve a greater than 3x performance improvement over Tegra 3 in the high level GLBenchmark 2.5 Egypt HD Offscreen 1080p test!
http://www.anandtech.com/show/6472/ipad-4-late-2012-review/4

Comparing Dalmore Tegra 4 against Nexus 7 Tegra 3
Fill rate - Offscreen (1080p) : 1.45x
Triangle throughput: Textured - Offscreen (1080p) : 1.27x
Triangle throughput: Textured, vertex lit - Offscreen (1080p) : 1.96x
Triangle throughput: Textured, fragment lit - Offscreen (1080p) : 1.24x

Well, it's more like the Tegra 4 low-level results are on average 50% higher than Tegra 3's, rather than equalized. The vertex lit triangle throughput scaling seems abnormally high compared to the other results. I wonder if nVidia made specific hardware improvements in that area compared to the rest, or if that's just an early driver quirk?
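
For what it's worth, the "average 50% higher" figure is just the plain mean of those four ratios (a crude arithmetic mean of per-test ratios, not a geometric mean of raw scores):

```python
# Dalmore Tegra 4 vs Nexus 7 Tegra 3: low-level GLBenchmark 2.5 ratios from above.
ratios = {
    "Fill rate (1080p offscreen)": 1.45,
    "Triangles: textured": 1.27,
    "Triangles: textured, vertex lit": 1.96,
    "Triangles: textured, fragment lit": 1.24,
}

mean_ratio = sum(ratios.values()) / len(ratios)
print(round(mean_ratio, 2))  # ~1.48 -- roughly "50% higher", not "equalized"
```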
 

I'm usually taking the fastest T3 results in the Kishonti database:

http://www.glbenchmark.com/phonedet...former+Pad+TF700T+Infinity&testgroup=lowlevel

As for your last question, I'd rather see final results before jumping to any preliminary conclusions, but keep in mind that having N more VS units is one aspect, another being the size of the trisetup ;)
 
Take a look at the low level benchmarks of the Dalmore Tegra 4 test system here: http://webcache.googleusercontent.c...e&testgroup=lowlevel&cd=1&hl=en&ct=clnk&gl=uk . Note that the Dalmore Tegra 4 test system has texture fillrate and textured triangle throughput roughly equalized with Tegra 3. Think about the significance of that for a second. With texture fillrate and textured triangle throughput equalized with Tegra 3 (which is completely unrealistic and implies severely crippled clock operating frequencies on Tegra 4), Tegra 4 is able to achieve a greater than 3x performance improvement over Tegra 3 in the high level GLBenchmark 2.5 Egypt HD Offscreen 1080p test!

FYI, the GLBenchmark 2.5 Egypt HD Offscreen 1080p results for Adreno 320 are slower than even the crippled Dalmore Tegra 4 test system. The top GLBenchmark results for Adreno 320 are highly misleading too. According to Anandtech, the LG Optimus G (with Adreno 320 GPU) "can't complete a single, continuous run of GLBenchmark 2.5 - the app will run out of texture memory and crash if you try to run through the entire suite in a single setting. The outcome is that the Optimus G avoids some otherwise nasty throttling". When using an Adreno 320 equipped smartphone that is actually able to achieve a continuous run of GLBenchmark 2.5 by throttling (such as the Google Nexus 4 phone), the GLBenchmark 2.5 Egypt HD Offscreen 1080p results for Adreno 320 are only 18 fps! See the data here: http://images.anandtech.com/graphs/graph6425/51288.png

Thanks, I hadn't seen Anandtech's run of the Optimus G... well, I didn't know Adreno 320 suffered from such issues in that benchmark. However, I had seen some mobile gaming on YouTube where a Tegra 3 Nexus 7 was able to offer smoother performance in GTA 3 than the Optimus G. Strange? Maybe that particular game is optimised for Tegra?

Well, I find that Anandtech result a bit odd considering Adreno 225's GLBenchmark 2.5 results.
Adreno 320 was supposed to be 2-3 times faster than the previous generation.

I think I'll wait to draw any conclusions about Tegra 4; we just don't have enough solid info about clock speeds to form a solid opinion yet. With the execution resources on offer, as well as other minor architectural improvements, I would expect Tegra 4 to be hovering around iPad 4 performance.
 