I don't disagree. But it's incorrect to claim that it's uncomfortably close to CMOS limits when it's 40°C away from that limit.
Review with Thermal Imaging:
http://www.hardwarecanucks.com/foru...iews/67076-nvidia-shield-tablet-review-7.html
Joshua wrote that; I don't necessarily share the same opinion (80-90°C is usually the standard throttling temperature for SoCs). Nevertheless, to actually reach and stay at 85°C is quite high and beyond other mobile SoCs.

Nebu, in your interesting update at Anandtech (http://www.anandtech.com/show/8329/revisiting-shield-tablet-gaming-ux-and-battery-life), you write the following: 85°C may be pushing it for system issues (user experience, certain limits imposed by product manufacturers, etc.), but it's not in any way considered close to the maximum safe temperature of most CMOS logic: that temperature is universally considered to be 125°C.
I'm thinking that the drop-off in performance at the end of these GFXBench runs isn't due to thermal throttling but to something else, like a voltage drop on the battery.

It's mentioned in the article: the low-battery warning is superimposed over the test, and it causes rendering overhead, making the GPU run at a higher frequency.
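For what it's worth, the two explanations would be easy to tell apart with a bit of logging alongside the benchmark. Here's a rough sketch of what I mean; the thermal_zone and power_supply nodes follow the standard Linux sysfs layout, but the GPU clock path is a placeholder that varies per SoC, and it assumes a rooted device.

```c
/* Rough logging sketch: poll SoC temperature, GPU clock and battery voltage
 * while a GFXBench run loops, to see whether a late-run FPS drop lines up
 * with temperature (thermal throttling), battery voltage, or neither
 * (e.g. the low-battery warning overlay adding rendering overhead).
 * The GPU clock node below is a placeholder and is SoC-specific. */
#include <stdio.h>
#include <unistd.h>

static long read_long(const char *path) {
    FILE *f = fopen(path, "r");
    long v = -1;
    if (f) {
        if (fscanf(f, "%ld", &v) != 1) v = -1;
        fclose(f);
    }
    return v;
}

int main(void) {
    printf("t_s,temp_mC,gpu_hz,batt_uV\n");
    for (int t = 0; ; t += 5) {
        long temp = read_long("/sys/class/thermal/thermal_zone0/temp");        /* millidegrees C */
        long gpu  = read_long("/sys/kernel/debug/clock/gbus/rate");            /* placeholder: SoC-specific node */
        long batt = read_long("/sys/class/power_supply/battery/voltage_now");  /* microvolts; supply name varies */
        printf("%d,%ld,%ld,%ld\n", t, temp, gpu, batt);
        fflush(stdout);
        sleep(5);
    }
}
```

If the GPU clock steps down while the temperature sits pinned at the throttle point, it's thermal; if clock and temperature hold steady and the drop only shows up once the low-battery warning appears, the overlay-overhead explanation fits.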
I don't know what the author is thinking, but a 47°C/55°C skin temperature is what I consider to be damn hot. Most reviewers out there would agree on this subjective measure.

Thanks. Those values seem like they would be uncomfortable (as Nebu says), but here the author says they're not. I'm not sure what to make of this. Then again, Nebu was using GFXBench T-Rex, not Trine 2, and perhaps the former is more demanding.
No argument that this is what SoCs (and others) do. Just pointing out that the reason for doing this has nothing to do with CMOS reliability and everything to do with system engineering.
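On the system-engineering point: on Linux/Android the throttling thresholds are just trip points that the device maker programs into the thermal zones, and you can read them back over sysfs. A minimal sketch, using the standard thermal sysfs ABI; zone 0 and the trip count are device-specific assumptions.

```c
/* List the trip points of one thermal zone. The 80-90 °C throttling numbers
 * live here as vendor-chosen policy ("passive"/"hot" trips), well below the
 * "critical" shutdown trip -- they are not a silicon CMOS limit.
 * Standard Linux thermal sysfs ABI; zone 0 is an assumption. */
#include <stdio.h>

int main(void) {
    char path[128], type[32];
    for (int i = 0; i < 16; i++) {
        long temp;
        snprintf(path, sizeof path,
                 "/sys/class/thermal/thermal_zone0/trip_point_%d_temp", i);
        FILE *f = fopen(path, "r");
        if (!f) break;                              /* no more trip points */
        if (fscanf(f, "%ld", &temp) != 1) temp = -1;
        fclose(f);

        snprintf(path, sizeof path,
                 "/sys/class/thermal/thermal_zone0/trip_point_%d_type", i);
        f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%31s", type) == 1)
                printf("trip %d: %s at %ld millidegrees C\n", i, type, temp);
            fclose(f);
        }
    }
    return 0;
}
```

On a typical phone or tablet you would expect passive trips in that 80-90°C band and a critical trip well above them, which matches the split being argued here between user-experience throttling and actual safety limits.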
RenderScript as a compute platform *benchmark* is a horrible, horrible thing. The above scenario is a perfect showcase for the problem that Google created by sidelining OpenCL.
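To make the complaint concrete: with OpenCL, a compute benchmark can pin the work to a chosen device and report what it actually measured, while RenderScript leaves that decision to the runtime, so you can't even be sure whether a score came from the CPU or the GPU. A minimal host-side sketch of the OpenCL half, using the standard OpenCL 1.x API; it assumes the vendor ships an OpenCL driver at all, which is exactly the sore point on Android.

```c
/* Minimal OpenCL host-side sketch: explicitly select the GPU before
 * benchmarking, so the result is attributable to a known device.
 * RenderScript offers no equivalent knob -- the runtime picks the target. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL platform available\n");
        return 1;
    }
    /* Swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU to benchmark the CPU path. */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no GPU exposed through OpenCL\n");
        return 1;
    }
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof name, name, NULL);
    printf("running compute benchmark on: %s\n", name);

    /* ...create context and command queue on `device`, build kernels, time them... */
    return 0;
}
```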
It will be interesting to see what Nvidia has cooked up with three years of R&D.
Remember how some sites claimed that Project Denver is pretty much dead? And how we kept on posting news about it regardless? Well, guess again, because Nvidia has just scheduled an event under the banner of “NVIDIA’s Denver Processor”. So unless they have scheduled an entire event just to say that the project is canceled, this here is the official reveal of Nvidia’s homegrown “super dual core”.
Jebus, I just sat up in anticipation of reading something new and interesting about Denver, and all I get is that? Doesn't that wccftech site have anything better to present than the bleedingly obvious, or some plagiarized stuff for decoration in between?
Is anyone still using standard ARM CPU cores any more, or have they all moved to custom designs?
Is that for better performance, or for perf/watt?