It doesn't affect just Tegra, but I'd love to finally see some 3D image quality comparisons in future tests. How hard is it, exactly, to investigate texture filtering quality on a ULP GPU, as just one example?

I'd like to see quality comparisons too; the mobile solutions are clearly at a point where they shouldn't be able to get away with everything quality-wise.
That said, I'd suspect anisotropic filtering to be crappy on the GeForce ULP: they've cut corners on arithmetic precision in the shader ALUs, and I see no reason they wouldn't cut loads of corners with anisotropic filtering too. Right now, though, I'd expect such shortcuts from the others as well.
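For anyone wanting to actually poke at this, here's a minimal sketch (my own, not from any vendor) of the per-pixel footprint math that the EXT_texture_filter_anisotropic spec describes. Comparing the probe count N the spec asks for against what a colored-mipmap test scene shows on-device is one way to spot corner-cutting; the derivative values at the bottom are made-up example inputs.

```python
import math

def aniso_footprint(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Reference LOD/probe-count math per EXT_texture_filter_anisotropic.

    (dudx, dvdx) and (dudy, dvdy) are the texel-space derivatives of the
    texture coordinates along screen x and y for one pixel.
    """
    px = math.hypot(dudx, dvdx)          # footprint extent along screen x
    py = math.hypot(dudy, dvdy)          # footprint extent along screen y
    p_max, p_min = max(px, py), min(px, py)
    # Number of probes the spec calls for along the line of anisotropy;
    # hardware that silently caps or rounds this down is cutting corners.
    n = min(math.ceil(p_max / max(p_min, 1e-8)), max_aniso)
    lod = math.log2(p_max / n)           # each probe samples a sharper mip
    return n, lod

# Hypothetical pixel on an obliquely viewed floor, stretched 8:1 in v:
print(aniso_footprint(dudx=1.0, dvdx=0.0, dudy=0.0, dvdy=8.0))
# -> (8, 0.0): eight trilinear probes at mip 0 instead of one blurry mip-3 tap.
```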
If QCOM won't react, then who will?
http://www.fudzilla.com/home/item/33658-qualcomm-dismisses-tegra-k1-benchmarks
Now where's that popcorn smiley when you need it?
Interestingly, NVIDIA's new reference design uses the same case as the Tegra Note 7, suggesting that the company's new chips offer higher performance without generating significantly more heat or taking up more space.

I really like that they went with a 16:10 display.
Usual marketing BS from QCOM.
Since these benchmarks were done on the "Tegra K1 reference tablet," they blow QCOM's BS about mobile power envelopes right out of the water.
Tegra K1 ‘SuperChip’ First Benchmarks Surface – Almost 4 Times Faster than Tegra 4, Blows the Competition Away
http://wccftech.com/tegra-k1-superchip-benchmarks-revealed-4-times-faster-tegra-4/
EDIT: Adding more info on the reference K1 tablet
NVIDIA Tegra K1 reference tablet has 4GB RAM, full HD display
http://liliputing.com/2014/01/nvidia-tegra-k1-reference-tablet-4gb-ram-full-hd-display.html
Qualcomm must be scared as hell: Nvidia had Tegra K1 reference tablets at their CES booth running all the demos.
http://www.youtube.com/watch?v=Pfp_ZFs7DIA
Then again, what do you expect from a company known for gems like these?
http://www.engadget.com/2013/08/02/qualcomm-anand-chandrasekher-eight-core-processors/
http://news.cnet.com/8301-1001_3-57606567-92/qualcomm-retracts-gimmick-comment-on-apple-64-bit-chip/
That said, I personally believe Qualcomm is off in their analysis of the Lenovo prototype. Based on the CPU and GPU operating frequencies intended for Shield v2, the Lenovo prototype is already running at least 15% lower maximum frequencies.
Kepler has always been very bandwidth efficient (relatively speaking), and Kepler.M is more of the same (with various bandwidth-saving techniques, including the on-board unified L2 cache), so it's hard to say for sure.
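To make the bandwidth-saving point concrete, here's a back-of-envelope sketch (my own numbers, purely illustrative, not NVIDIA's) of how a cache that absorbs a fraction of texture/framebuffer traffic amplifies what a narrow mobile memory interface effectively delivers:

```python
def effective_bandwidth(dram_gbs, hit_rate):
    """Bandwidth the shader cores effectively see when a fraction
    `hit_rate` of requests is served from an on-chip cache and only
    the misses ever touch DRAM."""
    return dram_gbs / (1.0 - hit_rate)

# Illustrative only: a ~17 GB/s mobile interface (64-bit LPDDR3-2133)
# with an assumed 40% L2 hit rate behaves like ~28 GB/s of raw DRAM.
print(effective_bandwidth(17.0, 0.40))  # -> 28.33...
```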
I'm not suggesting that the SoC in the Lenovo AIO has active cooling, but there's no mention of it either way in the original Tom's Hardware writeup. Nor, of course, is there any guarantee that Lenovo WON'T ship the final product in July at full frequencies.
Finally, and most importantly: who is really naive enough to believe that Qualcomm, or any other competitor out there, DOES NOT know exactly what the competition is cooking?
Now, I won't grasp at straws over that rather pitiful difference between the two, but your claim would be easier to swallow if any Kepler SKU had SIGNIFICANTLY less bandwidth than its Radeon counterpart.
How quick you forget. Look at GTX 680 vs. HD 7970.
How good you are at comparing apples to oranges. Do you think Tahiti has excess bandwidth for 3D, or rather to better serve its compute capabilities? Now compare the two in compute cases and watch the 680 get its ass handed to it, with flying colours.
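For the record, the raw numbers behind that comparison; a quick sketch of the standard bus-width × data-rate arithmetic (the clocks below are the public reference specs):

```python
def peak_bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak DRAM bandwidth: bus width in bytes times per-pin data rate."""
    return (bus_bits / 8) * data_rate_gbps

# GTX 680: 256-bit GDDR5 at 6.0 Gbps effective -> 192 GB/s
print(peak_bandwidth_gbs(256, 6.0))    # 192.0
# HD 7970: 384-bit GDDR5 at 5.5 Gbps effective -> 264 GB/s
print(peak_bandwidth_gbs(384, 5.5))    # 264.0
```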
It's funny, why do you need any MSAA with >320 DPI screens at all?

Gee, I wonder then why the whitepaper dwells as much as it does on the TXAA/FXAA el cheapo blur abominations and not on multisampling for a change.
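As an aside on the ">320 DPI" argument, here's a quick back-of-envelope (my assumptions: ~60 cycles/degree peak visual acuity, so roughly 120 px/degree to fully hide edge aliasing, and a hand-held viewing distance in inches):

```python
import math

def pixels_per_degree(dpi, viewing_distance_in):
    """Angular pixel density at a given viewing distance."""
    return dpi * viewing_distance_in * 2 * math.tan(math.radians(0.5))

ppd = pixels_per_degree(320, 12)   # a 320 DPI tablet held ~12 inches away
print(round(ppd, 1))               # ~67 px/deg: still well short of ~120,
                                   # so crawling high-contrast edges stay visible.
```

In other words, by this rough math even a 320 DPI panel at arm's length doesn't outrun the eye, so some form of AA still buys you something.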
That is a pretty silly comment considering we are talking about gaming perf. and not DP compute perf.
And FWIW, GTX 680 was not always behind in "compute" perf:
http://images.anandtech.com/graphs/graph5699/45166.png
http://images.anandtech.com/graphs/graph5699/45165.png
It's funny, why do you need any MSAA with > 320 DPI screens at all?
I'd rather have no AA at all than have to deal with any form of TXAA or FXAA.

TXAA, by the way, is MSAA plus a custom resolve filter with temporal reprojection; the blur won't be noticeable at resolutions of 720p and up on small 5-10 inch displays. I've seen temporal reprojection AA in a few games, and on its own it provides very good image quality at essentially no cost in mobile titles like Dragon Slayer on Android and Real Racing 3 on iOS. Pure console gamers have been playing games at sub-720p resolutions on 50-inch TVs for years; I wonder how casual mobile gamers will handle blur from TXAA or FXAA on 10-inch displays.
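For anyone curious what a temporal-reprojection resolve actually does, here's a minimal numpy sketch (my own toy version, not any shipping implementation): blend the current frame with motion-reprojected history, clamping the history to the current frame's local neighborhood to limit ghosting.

```python
import numpy as np

def temporal_aa_resolve(current, history, motion, alpha=0.1):
    """Toy temporal AA resolve on HxWx3 float images.

    motion[y, x] = (dx, dy) pixel offset from the previous frame to this one.
    """
    h, w = current.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch where each pixel was last frame (nearest-neighbor).
    sx = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    reprojected = history[sy, sx]
    # Clamp history to the 3x3 neighborhood of the current frame; this is
    # the usual anti-ghosting trick (edges wrap here, for brevity).
    shifted = [np.roll(current, (dy, dx), axis=(0, 1))
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    clamped = np.clip(reprojected,
                      np.minimum.reduce(shifted),
                      np.maximum.reduce(shifted))
    # Exponential accumulation: a small alpha averages many past frames,
    # which is where both the smoothing and the perceived blur come from.
    return alpha * current + (1.0 - alpha) * clamped
```

The low blend weight is the whole trade-off: each output pixel is effectively an average over several jittered frames, which is exactly why these resolves look soft in stills.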