NVIDIA Tegra Architecture

It was a marketing-centric observation and, for the record, the lowest common denominator right now supports MSAA at a reasonable performance penalty; whether ISVs actually use it in mobile games is a chapter of its own.

A "reasonable performance penalty"??? LOL, what is that supposed to mean?

I'd rather have no AA at all than have to deal with any form of TXAA or FXAA.

Yeah, well obviously you would say that. May as well go back to the stone age too. You do realize he is talking about 5-10" high PPI screens?
 
A "reasonable performance penalty"??? LOL, what is that supposed to mean?

There are performance results out there if you want to find them.

Yeah, well obviously you would say that. May as well go back to the stone age too. You do realize he is talking about 5-10" high PPI screens?

I realize perfectly well what he said and what I am claiming.
 
Relatively speaking, the 780 Ti has 336GB/s of bandwidth and the 290X 320GB/s; now I won't split hairs over that rather pitiful difference between the two, but I'd find your claim easier to swallow if any Kepler SKU had SIGNIFICANTLY less bandwidth than its Radeon counterpart.
Why do you bring Radeon efficiency into this mobile SoC discussion?
Here, it's Adreno vs Tegra :rolleyes:
 
Why do you bring Radeon efficiency into this mobile SoC discussion?
Here, it's Adreno vs Tegra :rolleyes:

Why are you asking me in the first place? It wasn't me that initially claimed that desktop Keplers are more bandwidth efficient. Read backwards and then ask me again.
 
Competitors being what they are, they will always try to diminish what the other folks are doing. It is interesting however, that when asked to comment on the K1, IMG on their blog took a similar stance as Qualcomm did :-

".....Before I do, I think it's worth pointing out that Lenovo ThinkVision 28 is not a tablet. It's a professional touchscreen monitor that acts as an AIO (All In One) PC - essentially a desktop computer with the monitor and processor in the same case.

These devices have slightly different specifications; for example, they can incorporate active or passive cooling and thus can handle higher power consumption and dissipation. They can also be (and typically are) clocked higher than a traditional smartphone/tablet (a form factor usually below 13", much thinner and battery-powered). I think it's important we wait until we have an apples to apples comparison (smartphone vs. smartphone or tablet vs. tablet) before jumping to conclusions related to performance."
yeah, and at the same time QC and IMG forget to mention that K1 was displayed at CES in an updated Tegra Note 7 with an FHD screen, 4GB of RAM and the same form factor as the previous year's T4 tablet. All demos ran without an active cooling solution. Even worse, they also forget to mention that Nvidia claims 60fps in the T-Rex bench on this new Tegra Note 7... :rolleyes:
 
Why are you asking me in the first place? It wasn't me that initially claimed that desktop Keplers are more bandwidth efficient. Read backwards and then ask me again.
talking about reading? And how about this: it was not me that said anything... read again...
It was ams who talked about efficiency. I read his message again and nowhere did he talk about Radeon. He talked about Kepler efficiency and how mobile Kepler brings new techniques to save even more bandwidth. Then YOU brought Radeon to the table in a mobile SoC thread... :rolleyes:

edit: Tegra k1 new bandwidth saving techniques:
tegra-k1-compression_1.jpg

tegra-k1-compression_2.jpg
 
It was a marketing-centric observation and, for the record, the lowest common denominator right now supports MSAA at a reasonable performance penalty;
And why do you think that K1 will not provide those levels? MSAA is essentially rendering at a higher sample rate, but only on geometry edges, so it can be treated as rendering at higher than native resolution. MSAA performance can be estimated as a function of resolution, multiplied by the MSAA sample count per pixel and the percentage of MSAA pixels (i.e. 4x MSAA performance at some resolution is usually equal to performance at a 1.3-1.4x higher resolution in the case of forward rendering). Then look at this picture - http://media.bestofmicro.com/L/U/418098/original/GFXBench27OffscreenSorted.png - 48 fps at 1080p and 16 fps at 4x higher resolution: just a 3x performance drop for 4x the resolution. It's not bandwidth bound in the heaviest mobile benchmark, so why should it be bad with MSAA? Do you think tilers don't shade MSAA pixels, or what?
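The cost model in that post can be sketched as a back-of-the-envelope estimate. This is purely illustrative, not a measurement; the function name and the edge-pixel fraction are made up for the example:

```python
def msaa_cost_factor(samples, edge_pixel_fraction):
    """Estimate relative shading cost of forward-rendered MSAA:
    non-edge pixels are shaded once, edge pixels `samples` times."""
    return (1 - edge_pixel_fraction) + edge_pixel_fraction * samples

# With ~11% of pixels on geometry edges, 4x MSAA lands near the
# 1.3-1.4x effective-resolution figure quoted above.
print(round(msaa_cost_factor(4, 0.11), 2))  # 1.33
```

The edge fraction varies a lot by scene and resolution, which is why the quoted 1.3-1.4x range is only a rule of thumb for forward renderers.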

I'd rather have no AA at all than have to deal with any form of TXAA or FXAA.
I'd rather have better shaders at 540p with FXAA or temporal AA than pure shaders at 1080p with 8x MSAA. I suppose console developers are on my side.
 
Well, most people's scepticism towards NVIDIA here is pretty easily understood. I mean, they've promised so much in the past, but they never ever really delivered. That alone makes me sceptical about the claims they are making this time as well. That's the main reason why I will refrain from really saying anything regarding Tegra K1 until I've seen comparable devices being compared to each other.

EDIT: Was meant as a response to xpea's post, but he has edited in the meantime.
 
talking about reading? And how about this: it was not me that said anything... read again...
It was ams who talked about efficiency. I read his message again and nowhere did he talk about Radeon. He talked about Kepler efficiency and how mobile Kepler brings new techniques to save even more bandwidth. Then YOU brought Radeon to the table in a mobile SoC thread... :rolleyes:

Here's the post I replied to: http://forum.beyond3d.com/showpost.php?p=1821057&postcount=1972
He clearly also meant desktop Kepler. That said we are obviously back on topic in the meantime and we don't need to get back to it necessarily.
 
Well, most people's scepticism towards NVIDIA here is pretty easily understood. I mean, they've promised so much in the past, but they never ever really delivered. That alone makes me sceptical about the claims they are making this time as well. That's the main reason why I will refrain from really saying anything regarding Tegra K1 until I've seen comparable devices being compared to each other.

EDIT: Was meant as a response to xpea's post, but he has edited in the meantime.
Very well said Helmore. I fully agree with you.

PS: I have edited my message to be more "neutral" and to avoid starting another flame war between Nvidia haters and the others... but it's very annoying to have these haters jumping on any NV-related info before independent tests come along to validate/invalidate NV's claims...
 
And why do you think that K1 will not provide those levels? MSAA is essentially rendering at a higher sample rate, but only on geometry edges, so it can be treated as rendering at higher than native resolution. MSAA performance can be estimated as a function of resolution, multiplied by the MSAA sample count per pixel and the percentage of MSAA pixels (i.e. 4x MSAA performance at some resolution is usually equal to performance at a 1.3-1.4x higher resolution in the case of forward rendering). Then look at this picture - http://media.bestofmicro.com/L/U/418098/original/GFXBench27OffscreenSorted.png - 48 fps at 1080p and 16 fps at 4x higher resolution: just a 3x performance drop for 4x the resolution. It's not bandwidth bound in the heaviest mobile benchmark, so why should it be bad with MSAA? Do you think tilers don't shade MSAA pixels, or what?

Again, my reply was marketing-centric; do you see any mention of MSAA in any of the K1 marketing material? Yes, their competitors gloat about high sample counts - ARM, for instance, mentioning 16x MSAA this time.

I'd rather have better shaders at 540p with FXAA or temporal AA than pure shaders at 1080p with 8x MSAA. I suppose console developers are on my side.
Since device sizes aren't increasing as rapidly as resolutions on ULP mobile devices (hell, there are already 5.5" 2K smartphones about to ship), I don't think I absolutely need any polygon-edge or polygon-intersection AA at all. It's just that when you get hammered from multiple sides, you don't have spare time to give a reasonable explanation. What I want to see FIRST, above all for my taste, is good anisotropic filtering on textures; if there's still performance left after that for even better shaders, be my guest by all means. No idea about anyone else, I'm just pretty bored of seeing mostly bilinear/trilinear filtered textures.

Even with AF enabled we are still in developers' hands and dependent on what priorities they set. However, they'll still weigh the lowest common denominator for any of their mobile games more than anything else.

Other than that: http://forum.beyond3d.com/showpost.php?p=1820787&postcount=1962 - and yes, anyone who thinks NV would have a disadvantage with AF is naive. With GPUs starting at 1 quad TMU at the lowest end, it's high time for a bit more filtering love in mobile games.
 
It's funny, why do you need any MSAA with > 320 DPI screens at all?
Aliasing is still visible on high resolution screens since the artifacts it introduces may be spread over the entire displayable frequency range. They're not confined to just the highest frequency (which optics and your visual system would filter out when viewed from sufficient distance).

And for jagged edges you have to take hyperacuity into account.
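The frequency-folding point above can be illustrated with a toy calculation: detail above the Nyquist limit of the pixel grid doesn't vanish, it folds down to a lower, clearly visible frequency. The function name and numbers here are illustrative, assuming ideal point sampling:

```python
def alias_frequency(signal_freq, sample_rate):
    """Frequency a pure tone folds to when point-sampled:
    anything above Nyquist (sample_rate / 2) mirrors back down."""
    f = signal_freq % sample_rate
    return min(f, sample_rate - f)

# A detail at 0.9 cycles/pixel on a 1-sample/pixel grid shows up as a
# coarse 0.1 cycles/pixel artifact - well inside the visible range,
# regardless of how high the screen's DPI is.
print(round(alias_frequency(0.9, 1.0), 3))
```

This is why high-PPI screens shrink jaggies but don't eliminate shimmering: the folded frequencies can land anywhere in the displayable range.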
 
Denver patent: non-native instructions

Reposted from Maxwell thread as it belongs here in the Tegra thread more than the Maxwell thread.


Looks like Nvidia has put those ex-Transmeta employees' talents to good use.

http://en.wikipedia.org/wiki/Transmeta

NVIDIA Licenses Technologies from Transmeta Corporation

Nvidia taps Transmeta team for x86 chip, claims analyst
http://www.theregister.co.uk/2009/11/04/nvidia_transmeta_x86

It is interesting in that the above patent was filed in May 2012 and only published in Nov 2013, so it does look like it is part of the Denver project.

Now onto the good stuff, what non-native instructions do you think it is being referred to?

One illustrative example of a non-native ISA that processing system 10 may be configured to execute is the 64-bit Advanced RISC Machine (ARM) instruction set; another is the x86 instruction set. Indeed, the full range of non-native ISAs here contemplated includes reduced instruction-set computing (RISC) and complex instruction-set computing (CISC) ISAs, very long instruction-word (VLIW) ISAs, and the like. The ability to execute selected non-native instructions provides a practical advantage for the processing system, in that it may be used to execute code compiled for pre-existing processing systems.
x86 is specifically mentioned, but I thought the $1.5 billion patent licensing deal between Nvidia and Intel prevented that.
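For the curious, the code-morphing idea the patent describes - translating guest (non-native) instructions into host operations and caching the translations, as Transmeta's software did - can be sketched with a toy interpreter. The guest ISA, opcodes and encodings below are entirely made up for illustration:

```python
# Toy dynamic binary translator: each "guest" instruction is translated
# once into a host callable, then cached so re-executing it skips the
# translation step (the core win of Transmeta-style code morphing).
GUEST_ISA = {
    "MOV": lambda regs, imm, dst: regs.__setitem__(dst, imm),
    "ADD": lambda regs, a, b, dst: regs.__setitem__(dst, regs[a] + regs[b]),
}

translation_cache = {}

def translate(instr):
    """Translate one guest instruction tuple into a host callable."""
    if instr not in translation_cache:
        op, *args = instr
        translation_cache[instr] = lambda regs: GUEST_ISA[op](regs, *args)
    return translation_cache[instr]

def run(program, nregs=4):
    """Execute a guest program on a fresh register file."""
    regs = [0] * nregs
    for instr in program:
        translate(instr)(regs)
    return regs

print(run([("MOV", 2, 0), ("MOV", 3, 1), ("ADD", 0, 1, 2)]))  # [2, 3, 5, 0]
```

A real implementation like Denver's translates whole basic blocks into optimized native VLIW code rather than single instructions, but the cache-and-reuse structure is the same idea.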
 
yeah, and at the same time QC and IMG forget to mention that K1 was displayed at CES in an updated Tegra Note 7 with an FHD screen, 4GB of RAM and the same form factor as the previous year's T4 tablet. All demos ran without an active cooling solution. Even worse, they also forget to mention that Nvidia claims 60fps in the T-Rex bench on this new Tegra Note 7... :rolleyes:

To be fair, I think both were commenting on the Toms hardware benchmarks, and these were run on the AIO device.
 
Is Tegra K1 without PCIe, like Tegra 4?
What was nice about Tegra 3 is that it had 4x PCIe lanes; otherwise it's not a very interesting chip in 2014. PCIe allows SATA, Ethernet, etc.

At that point, with the GK110-like graphics part and the ARMv8 CPU in K1 Denver, you have something credible to run a netbook or even desktop computer with a full desktop OS (those not made by Microsoft or Apple)
I can't help liking keyboards, hard drives etc.
Denver would be a bit more fun with an mSATA SSD and a 1.5TB laptop HDD, and something to plug a DAC or sound card into. (can be USB)
 
I'd rather have no AA at all than have to deal with any form of TXAA or FXAA.

EDIT: As you pointed out this was said in the context of mobile graphics on tiny screens with high ppi. In that context I do understand and agree with the preference of no AA at all in favour of using that power elsewhere.

Original Post:

No offense but that's ridiculous. You're comparing this:
[image]
To this:
[image]
Click on the images to expand them to full resolution and full screen then multiply the difference by 4x to account for the motion.

There's simply no comparison whatsoever. I'm not at all vendor biased, but TXAA is simply spectacular. Graphics take on a hand-drawn look while using it that far exceeds any other method of AA I've seen to date.
 
TXAA and SMAA are definitely a notch above grandma's FXAA or MLAA.

The screens above are a greatly exaggerated comparison. Unfortunately AC4 is quite unstable on my PC at the highest settings, and I need to switch it back to PS4-level settings when it starts to get that way (which seems to stabilize things). So I've had quite a lot of experience switching between SMAA (PS4) and TXAA 2x (the best my PC can manage), and the difference is huge (to my eyes).

It's not something I'd paid a lot of attention to before now, but now that I've come to understand the difference, I see TXAA as one of the most significant graphical settings you can apply to a PC game. Even with the fairly massive FOV I game at, the image quality is completely sublime.
 
SMAA has various feature levels. Look at Crysis 3 on PC for a full implementation. But even the basic setting is better than MLAA and FXAA - which it should be, because it is a superior form of MLAA.

I have not seen TXAA because it isn't supported on any of my hardware. My newest NV chip is a 560.
 
Is Tegra K1 without PCIe, like Tegra 4?
What was nice about Tegra 3 is that it had 4x PCIe lanes; otherwise it's not a very interesting chip in 2014. PCIe allows SATA, Ethernet, etc.

At that point, with the GK110-like graphics part and the ARMv8 CPU in K1 Denver, you have something credible to run a netbook or even desktop computer with a full desktop OS (those not made by Microsoft or Apple)
I can't help liking keyboards, hard drives etc.
Denver would be a bit more fun with an mSATA SSD and a 1.5TB laptop HDD, and something to plug a DAC or sound card into. (can be USB)
"The full GPIO breakdown for Tegra K1 includes essentially all the requisite connectivity you’d expect for a mobile SoC. For USB there’s 3 USB 2.0 ports, and 2 USB 3.0 ports. For storage Tegra K1 supports eMMC up to version 4.5.1, and there’s PCIe x4 which can be configured" (c) Anand
 