NVIDIA Tegra Architecture

I'm thinking that the drop-off in performance at the end of these GFXBench runs isn't due to thermal throttling but to something else, like voltage drop on the battery.
 
I don't disagree. But it's incorrect to point out that it's uncomfortably close to CMOS limits when it's 40°C away from that limit.

I don't disagree either.


Thanks. Those values seem like they would be uncomfortable (as Nebu says), but here the author says they're not. I'm not sure what to make of this. Then again, Nebu was using GFXBench T-Rex, not Trine 2, and perhaps the former is more demanding.
 
Nebu,

In your interesting update at Anandtech (http://www.anandtech.com/show/8329/revisiting-shield-tablet-gaming-ux-and-battery-life), you write the following: 85°C may be pushing it for system issues (user experience, certain limits imposed by product manufacturers, etc.), but it's not in any way considered close to the maximum safe temperature of most CMOS logic: that temperature is universally considered to be 125°C.
Joshua wrote that; I don't necessarily share the same opinion (80-90°C is usually the standard throttling temperature for SoCs). Nevertheless, to actually reach and stay at 85°C is quite high and beyond what other mobile SoCs do.

I'm thinking that the drop-off in performance at the end of these GFXBench runs isn't due to thermal throttling but to something else, like voltage drop on the battery.
It's mentioned in the article: the low-battery warning is superimposed over the test, and it causes rendering overhead, making the GPU run at a higher frequency.
Thanks. Those values seem like they would be uncomfortable (as Nebu says), but here the author says they're not. I'm not sure what to make of this. Then again, Nebu was using GFXBench T-Rex, not Trine 2, and perhaps the former is more demanding.
I don't know what the author is thinking, but 47°C/55°C skin temperature is what I consider to be damn hot. Most reviewers out there would agree with this subjective measure.
 
I'm thinking that the drop-off in performance at the end of these GFXBench runs isn't due to thermal throttling but to something else, like voltage drop on the battery.

Yeah, it doesn't look like thermal throttling, because the temps on the Shield Tablet are fairly steady and maintained for a long period of time, up until the point where the battery indicator becomes very low (after more than 110 continuous looped runs of the GFXBench T-Rex Onscreen benchmark). I even recall reading in the Shield section of the GeForce forums that CPU/GPU frequencies drop when the battery indicator gets below ~20%.
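If someone wants to check this directly, a minimal sketch in C is below: it logs the battery percentage next to the GPU clock once per second while the benchmark loops, so a frequency drop can be correlated with the battery indicator rather than with temperature. The sysfs paths are assumptions on my part; the battery node is fairly standard on Linux/Android, but the Tegra GPU clock node is device-specific, may differ on the Shield Tablet, and generally needs root to read.

Code:
/* Sketch only: log battery percentage alongside the GPU clock once per
 * second. Both sysfs paths below are assumptions, not verified on the
 * Shield Tablet; the GPU clock node in particular is a guess for Tegra K1. */
#include <stdio.h>
#include <unistd.h>

static long read_long(const char *path) {
    FILE *f = fopen(path, "r");
    long v = -1;
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void) {
    for (;;) {
        long pct = read_long("/sys/class/power_supply/battery/capacity");
        long hz  = read_long("/sys/kernel/debug/clock/gbus/rate");
        printf("battery=%ld%%  gpu=%ld MHz\n", pct, hz > 0 ? hz / 1000000 : -1);
        sleep(1);
    }
    return 0;
}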
 
Thanks. Those values seem like they would be uncomfortable (as Nebu says), but here the author says they're not. I'm not sure what to make of this. Then again, Nebu was using GFXBench T-Rex, not Trine 2, and perhaps the former is more demanding.

There is a hot spot on the Shield Tablet where the TK1 is located, but the tablet's heat-dissipation capability seems to be very good, and the hot spot is in an area that one normally does not touch during handheld gaming.

Here is what Frozenbyte, the Trine 2 developer, had to say: "Since Trine 2: Complete Story was now being developed for a tablet device, we had to make sure the device would not get too hot (since it's lacking powerful fans). In the latest version, the Shield Tablet (K1) can pretty much keep 720p30 without heating up too much".
 
(80-90°C is usually the standard throttling temperature for SoCs). Nevertheless, to actually reach and stay at 85°C is quite high and beyond what other mobile SoCs do.
No argument that this is what SoCs (and others) do. I'm just pointing out that the reason for doing this has nothing to do with CMOS reliability and everything to do with system engineering.

Those other mobile SoCs should be just as reliable at 125°C as well.
 
It's mentioned in the article: the low-battery warning is superimposed over the test, and it causes rendering overhead, making the GPU run at a higher frequency.

No, I mean the uncapped case, not the capped case, where the GPU actually drops in frequency near the end. Two-plus hours is a really long time for anything in the tablet to reach a steady-state temperature, and every temperature reading you showed reached it much quicker.
 
So is there any real-world significance to the CompuBench compute benchmarks that accompany the GFXBench graphics benchmarks (http://compubench.com/result.jsp)?

The performance advantage for mobile Kepler (in TK1) is often ridiculous compared to Adreno 330 ( http://compubench.com/compare.jsp?b...+S5+(SM-G900,+SM-G900x,+SC-04F,+SCL23)&cols=2 ):

Face detection: 6.8x
Particle simulation: 2.5x
Provence: 1.6x
Gaussian blur: 9.3x
Histogram: 5.8x
Julia Set: 82.7x (!)
Ambient Occlusion: 369.2x (!)

Presumably this is due to both hardware architecture and software drivers?

The differences in raw theoretical GFLOPS throughput between these two GPUs are not anywhere near this large, for the most part.
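For reference, here is a back-of-the-envelope peak-FP32 comparison. The ALU counts and clocks are assumptions based on commonly quoted figures (192 ALUs at ~950 MHz for the TK1's Kepler GPU, 144 ALUs at ~578 MHz for Adreno 330), not vendor-confirmed numbers, but even so the raw gap comes out around 2x, nowhere near the 80-370x seen in some of those sub-tests.

Code:
/* Back-of-the-envelope peak-FP32 throughput, assuming 2 FLOPs per ALU per
 * cycle (FMA). ALU counts and clocks are assumptions, not confirmed specs. */
#include <stdio.h>

static double peak_gflops(int alus, double clock_ghz) {
    return alus * 2.0 * clock_ghz;
}

int main(void) {
    double tk1    = peak_gflops(192, 0.950); /* ~365 GFLOPS (assumed config) */
    double adreno = peak_gflops(144, 0.578); /* ~166 GFLOPS (assumed config) */
    printf("TK1 Kepler : %6.1f GFLOPS\n", tk1);
    printf("Adreno 330 : %6.1f GFLOPS\n", adreno);
    printf("Ratio      : %.1fx\n", tk1 / adreno); /* roughly 2x, not 80-370x */
    return 0;
}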
 
RenderScript as a compute platform *benchmark* is a horrible horrible thing. The above scenario is a perfect showcase for the problem that Google created by sidelining OpenCL.
 
RenderScript as a compute platform *benchmark* is a horrible horrible thing. The above scenario is a perfect showcase for the problem that Google created by sidelining OpenCL.


+1
I would never use RenderScript for GPU compute. It's looking like the only real way to do GPU compute is GL compute shaders. Back to the future...
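For anyone curious what the GL compute route looks like in practice, here is a minimal sketch of dispatching a compute shader under OpenGL ES 3.1. It assumes an ES 3.1 context is already created and current (e.g. via EGL), that count is a multiple of 64, and it omits all error checking; double_on_gpu is just an illustrative name, not an API.

Code:
/* Minimal sketch of GPU compute via an OpenGL ES 3.1 compute shader.
 * Assumes a current ES 3.1 context, count being a multiple of 64,
 * and no error checking, purely to show the shape of the API. */
#include <GLES3/gl31.h>
#include <string.h>

static const char *kSrc =
    "#version 310 es\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Data { float v[]; };\n"
    "void main() {\n"
    "    uint i = gl_GlobalInvocationID.x;\n"
    "    v[i] = v[i] * 2.0;\n"
    "}\n";

void double_on_gpu(float *data, int count) {
    /* Compile and link the compute program. */
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &kSrc, NULL);
    glCompileShader(shader);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, shader);
    glLinkProgram(prog);

    /* Upload the data into a shader storage buffer at binding = 0. */
    GLuint ssbo;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, count * sizeof(float), data, GL_DYNAMIC_COPY);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

    /* One work-group of 64 invocations per 64 elements. */
    glUseProgram(prog);
    glDispatchCompute(count / 64, 1, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

    /* Read the results back to the CPU. */
    const float *out = (const float *)glMapBufferRange(
        GL_SHADER_STORAGE_BUFFER, 0, count * sizeof(float), GL_MAP_READ_BIT);
    memcpy(data, out, count * sizeof(float));
    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
}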
 
Not dead after all

Nvidia's Denver Processor

1:30PM PDT

http://www.hotchips.org
It will be interesting to see what Nvidia cooked up with three years of R&D.

Remember how some sites claimed that Project Denver is pretty much dead? And how we kept on posting news about it regardless? Well, guess again, because Nvidia has just scheduled an event under the banner of “NVIDIA’s Denver Processor”. So unless they have scheduled an entire event just to say that the project is canceled, this here is the official reveal of Nvidia’s homegrown “super dual core”.

[Image: Tegra K1 Project Denver]


http://wccftech.com/nvidia-unveil-denver-processor-armv8-11th-august
 
Jebus, I just sat up in anticipation of reading something new and interesting about Denver, and all I get is that? Doesn't that wccftech site have anything better to present than the bleedingly obvious, or some plagiarized stuff for decoration in between?
 
Jebus, I just sat up in anticipation of reading something new and interesting about Denver, and all I get is that? Doesn't that wccftech site have anything better to present than the bleedingly obvious, or some plagiarized stuff for decoration in between?

We still have to wait (like everyone else) for a little over 4 1/2 hours until the conference starts.

Hopefully some site (AnandTech, RWT) will have the details (and I hope a white paper link) soon after the conference.
 
Is anyone still using standard ARM CPU cores any more or have they all moved to custom designs?

Is that for better performance, or for better perf/watt?
 
Is anyone still using standard ARM CPU cores any more or have they all moved to custom designs?

Is that for better performance, or for better perf/watt?

Regarding current off-the-shelf consumer products, Apple and Qualcomm (for their higher-end SoCs) are the only ones using custom designs. Everyone else is using ARM's Cortex A7, A9, A15, etc. Mid- and low-end Snapdragon SoCs are using the Cortex A7.

For the coming year, Qualcomm is set to transition to ARM cores throughout their whole range, using the Cortex A57 and A53 in big.LITTLE.
I think only Nvidia (Denver) and Apple will be using custom designs during most of 2015.

So I guess the answer is yes, many ARM SoCs are still using standard CPU cores. The second generation of big.LITTLE combos may become hard to beat in performance/watt.
 
Hmm, I didn't know Qualcomm was going back to ARM cores instead of continuing with Krait and other custom cores.

Unless Nvidia's products become hits, I guess ARM cores will dominate the market, possibly including the high end.
 
I wouldn't be surprised if the only reason Qualcomm is using ARM's Cortex A57 and A53 cores in their high-end chips is to be faster to market with a 64-bit high-end chipset, and that any follow-up high-end chipsets of theirs will use an ARMv8 ISA CPU design of their own. Mind you, I'm not expecting this to be the case, only that I wouldn't be surprised if it were.
 
From Hot Chips: the Denver CPU at 2.5 GHz is faster than Intel's Haswell-based Celeron 2955U (1.4 GHz base clock) on SPECint, so it is also much faster than Silvermont.
Let's see how that translates into real-world performance...
 