NVIDIA Tegra Architecture

It doesn't just affect Tegra, but I'd love to finally see some 3D image quality comparisons in future tests. How hard is it, exactly, to investigate texture filtering quality on a ULP GPU, as just one example?
That, and thorough perf/mW tests, preferably with properly warmed-up hardware to single out the throttling cases.
I "love" reviews with just AnTuTu results...
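On the warm-up/throttling point, here's a minimal sketch of what such a test could look like; run_benchmark() is a hypothetical stand-in for whatever fixed workload a review would use, and the only idea is to run it back to back and compare the first cold pass against the worst of the last few warmed-up passes:

```python
import time

def run_benchmark():
    # Hypothetical placeholder: on a real device this would render/compute a
    # fixed workload (e.g. one benchmark scene) and return its score.
    raise NotImplementedError

def sustained_score(passes=20, pause_s=5):
    """Run the same workload back to back and compare cold vs. warmed-up scores."""
    scores = []
    for i in range(passes):
        score = run_benchmark()
        scores.append(score)
        print(f"pass {i + 1:2d}: {score:.1f}")
        time.sleep(pause_s)  # brief pause only, so the SoC stays warm
    cold = scores[0]
    hot = min(scores[-5:])           # worst of the last few, fully warmed-up passes
    drop = (1.0 - hot / cold) * 100.0
    print(f"cold: {cold:.1f}  sustained: {hot:.1f}  throttle drop: {drop:.1f}%")
    return scores
```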
 
It doesn't just affect Tegra, but I'd love to finally see some 3D image quality comparisons in future tests. How hard is it, exactly, to investigate texture filtering quality on a ULP GPU, as just one example?
I'd like to see quality comparisons too; the mobile solutions are clearly at a point where you shouldn't get away with everything quality-wise.
That said, I'd suspect anisotropic filtering to be crappy on GeForce ULP: they've cut corners on arithmetic precision in the shader ALUs, and I see no reason they wouldn't cut loads of corners with anisotropic filtering too. But right now I'd expect such shortcuts from others as well.
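For the texture filtering question, one low-tech way to eyeball it (just an illustration, not any existing tool): give a test texture a mip chain where every level is a distinct flat colour, map it onto a plane receding at a steep angle, and look at where and how smoothly the colours transition; sloppy trilinear or anisotropic filtering shows up as hard bands or as blur setting in far too early. A small sketch that writes such a mip chain as PPM images, which any GL/GL ES test app could upload as the individual mip levels:

```python
# Write a mip chain where each level is a distinct flat colour (binary PPM files).
# Uploaded as the mip levels of a single texture, the colour transitions make
# trilinear/anisotropic filtering behaviour directly visible on screen.
COLOURS = [
    (255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0),
    (0, 255, 255), (0, 0, 255), (128, 0, 255), (255, 255, 255),
]

def write_mip_chain(base_size=128):
    size, level = base_size, 0
    while size >= 1:
        colour = bytes(COLOURS[level % len(COLOURS)])
        with open(f"mip_{level}_{size}x{size}.ppm", "wb") as f:
            f.write(f"P6 {size} {size} 255\n".encode())
            f.write(colour * (size * size))
        size //= 2
        level += 1

if __name__ == "__main__":
    write_mip_chain()
```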
 
I'd like to see quality comparisons too; the mobile solutions are clearly at a point where you shouldn't get away with everything quality-wise.
That said, I'd suspect anisotropic filtering to be crappy on GeForce ULP: they've cut corners on arithmetic precision in the shader ALUs, and I see no reason they wouldn't cut loads of corners with anisotropic filtering too. But right now I'd expect such shortcuts from others as well.

Shortcuts aren't a crime if there aren't any unwanted side effects in what you get on screen.

As for the rest, I'm obviously talking about K1, and there the precision is more than enough.
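Just to make the precision point concrete (an illustration of what reduced ALU precision means in general, not a measurement of any particular GPU): FP16 keeps roughly three decimal digits, so a value like a coordinate on a large surface gets visibly quantised, while FP32 does not. A quick numpy sketch:

```python
import numpy as np

# FP16 keeps ~3 decimal digits, FP32 ~7. Example value: a coordinate on a
# large texture atlas or a position far from the origin.
coord = 2048.37

as_fp32 = np.float32(coord)
as_fp16 = np.float16(coord)

print(f"fp32: {float(as_fp32):.4f}  error: {abs(float(as_fp32) - coord):.6f}")
print(f"fp16: {float(as_fp16):.4f}  error: {abs(float(as_fp16) - coord):.6f}")

# The FP16 spacing (ULP) around 2048 is 2.0, so anything finer than two
# units simply disappears - exactly the kind of corner-cutting that can
# become visible on screen.
print("fp16 spacing at 2048:", float(np.spacing(np.float16(2048.0))))
```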
 
If QCOM won't react, then who would?

http://www.fudzilla.com/home/item/33658-qualcomm-dismisses-tegra-k1-benchmarks

Now where's that popcorn smiley when you need it?

Usual marketing BS from QCOM.

Since these benchmarks were done on the "Tegra K1 reference tablet", they blow QCOM's BS about mobile power envelopes right out of the water.

Tegra K1 ‘SuperChip’ First Benchmarks Surface – Almost 4 Times Faster than Tegra 4, Blows the Competition Away

http://wccftech.com/tegra-k1-superchip-benchmarks-revealed-4-times-faster-tegra-4/

EDIT: Adding more info on the reference K1 tablet

NVIDIA Tegra K1 reference tablet has 4GB RAM, full 1920x1200 HD display

http://liliputing.com/2014/01/nvidia-tegra-k1-reference-tablet-4gb-ram-full-hd-display.html

Interestingly, NVIDIA’s new reference design uses the same case as the Tegra Note 7, suggesting that the company’s new chips offer higher performance without generating significantly more heat or taking up more space.
I really like that they went with a 16:10 display.
 
Tegra K1 ‘SuperChip’ First Benchmarks Surface – Almost 4 Times Faster than Tegra 4, Blows the Competition Away

http://wccftech.com/tegra-k1-superchip-benchmarks-revealed-4-times-faster-tegra-4/

That's a claimed result from NVIDIA which they've added into that graph themselves; save your breath until real third-party benchmark results from final devices appear. I might not like PR stunts like the one from QCOM, but they still have a point whether you like it or not.

In the meantime - and since QCOM's core business is actually smartphone SoCs - you might want to enlighten me as to how many smartphone design wins you'd expect for K1.
 
Qualcomm must be scared as hell; Nvidia had all the Tegra K1 reference tablets at their CES booth running all the demos.

http://www.youtube.com/watch?v=Pfp_ZFs7DIA

Then again, what do you expect from a company known for gems like these?

http://www.engadget.com/2013/08/02/qualcomm-anand-chandrasekher-eight-core-processors/

http://news.cnet.com/8301-1001_3-57606567-92/qualcomm-retracts-gimmick-comment-on-apple-64-bit-chip/

And NV is the mother of all innocence when it comes to questionable PR/marketing? You might want to read up a bit more on the history of the past decades. Yes, QCOM is mighty scared of NV while having, up until now, about 33 times NV's market share in the ULP SoC market. Exactly because of their market position, I consider that kind of PR stunt redundant.
 
Considering that Qualcomm has a near-stranglehold on worldwide LTE (in part because some major carriers in the USA still rely on CDMA tech), the word "scared" really doesn't make too much sense. Qualcomm is still in the driver's seat, but at least NVIDIA can finally eclipse them in certain key areas with Tegra K1.

That said, I personally believe that Qualcomm is off with their analysis of the Lenovo prototype. Based on the CPU and GPU clock frequencies intended for Shield v2, the Lenovo prototype already runs at least 15% lower max frequencies in comparison.
 
That said, I personally believe that Qualcomm is off with their analysis of the Lenovo prototype. Based on the CPU and GPU clock frequencies intended for Shield v2, the Lenovo prototype already runs at least 15% lower max frequencies in comparison.

I wouldn't suggest that the SoC in the Lenovo AIO has active cooling, but there's no note about it either way in the original Tom's Hardware writeup. Nor, of course, is there any guarantee that Lenovo WON'T ship the final product in July at full frequencies.

If NV had a working Shield 2 sample, why didn't they show it at CES?

Finally, and most importantly: who is naive enough to believe that Qualcomm or any other competitor out there DOES NOT know exactly what their competition is cooking? Nearly everyone behind the curtain is pondering the same thing Qualcomm is here; it's just that none of them will go public with a PR reply. As I said, considering QCOM's market position the PR reply is essentially redundant, but then again, was there ever in past years a Tegra-related marketing presentation that didn't run direct comparisons against Qualcomm's products?

Those 15% you keep harping on can easily be covered by the difference between 28HPm and 20SoC at TSMC in H2 '14.

***edit: and since I can already sense possible answers to that last sentence, the real question in the end is in how many cases A vs. B will be more bandwidth-limited in real mobile games. 25GB/s is still more than 17GB/s.
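For what it's worth, here is the arithmetic behind those two figures; the memory configurations are my assumption about what's being compared (a 128-bit LPDDR3-1600 interface on the one side, a 64-bit LPDDR3-2133 interface on the other):

```python
def dram_bandwidth_gbs(bus_width_bits, transfer_rate_mtps):
    """Peak DRAM bandwidth in GB/s: bus width in bytes times transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mtps * 1e6 / 1e9

# Assumed configurations, not taken from the thread:
print(f"{dram_bandwidth_gbs(128, 1600):.1f} GB/s")  # 128-bit LPDDR3-1600 -> 25.6
print(f"{dram_bandwidth_gbs(64, 2133):.1f} GB/s")   # 64-bit LPDDR3-2133  -> ~17.1
```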
 
Kepler has always been very bandwidth efficient (relatively speaking), and Kepler.M is more of the same (with various bandwidth-saving techniques, including the on-board unified L2 cache), so it's hard to say for sure.
 
Kepler has always been very bandwidth efficient (relatively speaking)...

Relatively speaking, the 780 Ti has 336GB/s of bandwidth and the 290X 320GB/s; now I won't split hairs over that rather pitiful difference between the two, but I'd find it easier to swallow your claim if any Kepler SKU had SIGNIFICANTLY less bandwidth than its Radeon counterpart.
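For context, that's simply how the headline numbers fall out of bus width and memory data rate (back-of-the-envelope, using the commonly published configurations for the two cards):

```python
# Peak bandwidth = bus width in bytes x effective GDDR5 data rate (Gbps per pin).
gtx_780_ti = (384 / 8) * 7.0   # 384-bit bus at 7 Gbps -> 336 GB/s
r9_290x    = (512 / 8) * 5.0   # 512-bit bus at 5 Gbps -> 320 GB/s
print(gtx_780_ti, r9_290x)     # 336.0 320.0
```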

, and Kepler.M is more of the same (with various bandwidth-saving techniques, including the on-board unified L2 cache), so it's hard to say for sure.

Gee, I wonder then why the whitepaper dwells so much on the TXAA/FXAA el cheapo blur abominations and not on multisampling for a change :LOL:

If it were "more of the same", it would be capable of more than 1 triangle every 2 clocks (or 0.5 tris/clock, while 1 z/clock). I don't think Damien Triolet invented it either.
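To put that setup rate in absolute numbers (the ~950 MHz GPU clock is my assumption, based on the figure commonly quoted for the K1 reference platform; the 1 triangle/clock line is just the comparison point implied above):

```python
gpu_clock_hz = 950e6               # assumed K1 GPU clock, not from the thread
half_rate = 0.5 * gpu_clock_hz     # 1 triangle every 2 clocks
full_rate = 1.0 * gpu_clock_hz     # a 1 triangle/clock rate for comparison
print(f"{half_rate / 1e6:.0f} vs {full_rate / 1e6:.0f} million triangles/s")
```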
 
I wouldn't suggest that the SoC in the Lenovo AIO has active cooling, but there's no note about it either way in the original Tom's Hardware writeup. Nor, of course, is there any guarantee that Lenovo WON'T ship the final product in July at full frequencies.

Finally, and most importantly: who is naive enough to believe that Qualcomm or any other competitor out there DOES NOT know exactly what their competition is cooking?

Competitors being what they are, they will always try to diminish what the other folks are doing. It is interesting, however, that when asked to comment on the K1, IMG took a similar stance on their blog as Qualcomm did:

".....Before I do, I think it's worth pointing out that Lenovo ThinkVision 28 is not a tablet. It's a professional touchscreen monitor that acts as an AIO (All In One) PC - essentially a desktop computer with the monitor and processor in the same case.

These devices have slightly different specifications; for example, they can incorporate active or passive cooling and thus can handle higher power consumption and dissipation. They can also be (and typically are) clocked higher than a traditional smartphone/tablet (a form factor usually below 13", much thinner and battery-powered). I think it's important we wait until we have an apples to apples comparison (smartphone vs. smartphone or tablet vs. tablet) before jumping to conclusions related to performance."
 
now I won't split hairs over that rather pitiful difference between the two, but I'd find it easier to swallow your claim if any Kepler SKU had SIGNIFICANTLY less bandwidth than its Radeon counterpart.

How quick you forget. Look at GTX 680 vs. HD 7970.
 
How quick you forget. Look at GTX 680 vs. HD 7970.

How good you are at comparing apples to oranges. Do you think Tahiti has bandwidth to spare for 3D, or rather to better serve its compute capabilities? Now compare the two in compute cases and watch the 680 get its ass handed to it with flying colours :rolleyes:
 
How good you are at comparing apples to oranges. Do you think Tahiti has bandwidth to spare for 3D, or rather to better serve its compute capabilities? Now compare the two in compute cases and watch the 680 get its ass handed to it with flying colours :rolleyes:

That is a pretty silly comment considering we are talking about gaming perf. and not DP compute perf.

And FWIW, GTX 680 was not always behind in "compute" perf:

http://images.anandtech.com/graphs/graph5699/45166.png

http://images.anandtech.com/graphs/graph5699/45165.png
 
Gee, I wonder then why the whitepaper dwells so much on the TXAA/FXAA el cheapo blur abominations and not on multisampling for a change :LOL:
It's funny; why do you need any MSAA at all with >320 DPI screens?
TXAA, by the way, is MSAA plus a custom resolve filter with temporal reprojection, and the blur won't be noticeable at resolutions of 720p or higher on small 5-10 inch displays. I've seen temporal reprojection AA in a few games, and on its own it provides very good image quality at no cost in mobile titles like Dragon Slayer on Android and Real Racing 3 on iOS. Pure console gamers have been playing games at sub-720p resolutions on 50-inch TVs for years; I wonder how casual mobile gamers will handle blur from TXAA or FXAA on 10-inch displays :rolleyes:
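The >320 DPI argument can be put in numbers with a little angular-size arithmetic; the viewing distances below are my assumptions, and ~1 arcminute is the usual rule-of-thumb limit of visual acuity:

```python
import math

def pixel_arcminutes(dpi, viewing_distance_cm):
    """Angle subtended by one pixel at the given viewing distance, in arcminutes."""
    pixel_mm = 25.4 / dpi
    return math.degrees(math.atan(pixel_mm / (viewing_distance_cm * 10.0))) * 60.0

# ~320 DPI tablet held at ~35 cm vs. a 50" 720p TV viewed from ~2.5 m (assumed).
tv_dpi = (1280**2 + 720**2) ** 0.5 / 50.0     # ~29 DPI for a 50-inch 720p panel
print(f"tablet: {pixel_arcminutes(320, 35):.2f} arcmin")   # well under 1 arcmin
print(f"TV:     {pixel_arcminutes(tv_dpi, 250):.2f} arcmin")
```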
 
That is a pretty silly comment considering we are talking about gaming perf. and not DP compute perf.

AMD doesn't have separate chips for each case, and neither does NV. At the time, Tahiti was AMD's highest-end single-chip part. With what kind of chips did they feed the professional markets?

But you can always try a "smart" analysis of your own as to what else Tahiti needs that much bandwidth for while having "only" 32 ROPs for 3D.


***edit:

If that's all you've got to justify your claims, you'd better come up with something more substantial before you call someone else's comment "silly".
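As a very crude back-of-the-envelope illustration of that point (it ignores blending, MSAA, depth and texture traffic entirely; the clock and bandwidth figures are the commonly published HD 7970 reference numbers): 32 ROPs writing one 32-bit colour value per clock don't come close to saturating Tahiti's memory interface on their own.

```python
# Crude model: peak colour-write traffic from the ROPs alone vs. total DRAM bandwidth.
rops            = 32
core_clock_ghz  = 0.925     # HD 7970 reference clock
bytes_per_pixel = 4         # plain RGBA8 writes; no blending, MSAA or Z considered
dram_bw_gbs     = 264       # 384-bit GDDR5 at 5.5 Gbps

rop_write_gbs = rops * core_clock_ghz * bytes_per_pixel
print(f"ROP colour writes: ~{rop_write_gbs:.0f} GB/s of {dram_bw_gbs} GB/s total")
```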
 
It's funny; why do you need any MSAA at all with >320 DPI screens?

It was a marketing-centric observation, and for the record's sake: the lowest common denominator right now supports MSAA at a reasonable performance penalty; whether it's actually used by ISVs in mobile games is a chapter of its own.

TXAA, by the way, is MSAA plus a custom resolve filter with temporal reprojection, and the blur won't be noticeable at resolutions of 720p or higher on small 5-10 inch displays. I've seen temporal reprojection AA in a few games, and on its own it provides very good image quality at no cost in mobile titles like Dragon Slayer on Android and Real Racing 3 on iOS. Pure console gamers have been playing games at sub-720p resolutions on 50-inch TVs for years; I wonder how casual mobile gamers will handle blur from TXAA or FXAA on 10-inch displays :rolleyes:
I'd rather have no AA at all than have to deal with any form of TXAA or FXAA.
 