NVIDIA Tegra Architecture

In that case I would love to know how it runs Windows 8 RT, since that mandates ps_2_0, which in turn requires 24-bit.

Microsoft is not crystal clear on that topic, but my understanding is that ps_2_0 requires FP24, as does ps_4_0_level_9_2. However, ps_4_0_level_9_1 would work with FPwhatever.

D3D1x.x ps_4_0_level_9_x profiles are slightly different from D3D9.x ps_2_x, and shaders actually have to be compiled against both profiles, but precision doesn't matter there.
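As an aside, the practical size of the FP20-vs-FP24 gap being argued about can be sketched with a toy mantissa quantizer. This is a rough illustration only: the mantissa widths used below (13 bits for a 20-bit float, 16 bits for a 24-bit float) are assumed splits, not vendor specifications, and exponent-range limits of the narrow formats are ignored.

```python
import math

def quantize_mantissa(x: float, mantissa_bits: int) -> float:
    """Round x to a float with the given number of explicit mantissa bits.
    Toy model: ignores exponent-range limits of the narrow formats."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** (mantissa_bits + 1)  # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

# Assumed mantissa widths, for illustration only:
FP20_MANT = 13   # hypothetical split for a 20-bit float
FP24_MANT = 16   # hypothetical split for a 24-bit float

x = 1.0 / 3.0
err20 = abs(quantize_mantissa(x, FP20_MANT) - x)
err24 = abs(quantize_mantissa(x, FP24_MANT) - x)
print(err20, err24)
```

For a value like 1/3, three extra mantissa bits cut the rounding error by a factor of 2^3 = 8; whether that difference is ever visible in a shaded pixel is exactly the debate in this thread.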
 
In that case I would love to know how it runs Windows 8 RT, since that mandates ps_2_0, which in turn requires 24-bit.
http://forum.beyond3d.com/showpost.php?p=1613873&postcount=15

I asked this last year when nVidia's Tegra 3 Windows RT tablets were announced but never really got a good response at that time.

Microsoft is not crystal clear on that topic, but my understanding is that ps_2_0 requires FP24, as does ps_4_0_level_9_2. However, ps_4_0_level_9_1 would work with FPwhatever.

D3D1x.x ps_4_0_level_9_x profiles are slightly different from D3D9.x ps_2_x, and shaders actually have to be compiled against both profiles, but precision doesn't matter there.
http://www.tomshardware.com/reviews/tegra-4-tegra-4i-gpu-architecture,3445-3.html

That appears to be correct. According to Tom's Hardware, nVidia is only claiming Direct3D 9_1 support for Tegra 4 where FP20 is sufficient. Tegra 4 does support a number of 9_3 level features, like instancing, which they can't currently expose in Windows given the definition of the feature levels. Apparently nVidia is trying to convince Microsoft to allow some type of exception/compromise to expose their cherry-picked feature set, which I thought was precisely what Microsoft wanted to prevent when they eliminated capability bits in DX10 and later instituted well-defined, cross-vendor baseline feature levels.

EDIT:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/jj714085(v=vs.105).aspx

On Windows Phone 8, all devices have GPUs that support feature level 9_3.
...
If an app creates a Direct3D graphics device and requests a feature level higher than 9_3, the device will be successfully created on the emulator. However, when the app is run on the phone, it will not be able to create a Direct3D graphics device with a feature level higher than 9_3, so some code that requires higher feature level functionality could work on the emulator but not work on a physical device.
An interesting note is that while Windows RT mandates a minimum of Direct3D 9_1, Windows Phone 8 actually requires Direct3D 9_3 on the dot. That would presumably mean both Tegra 4 and 4i are shut out from the Windows Phone market, which depending on how that OS grows in the next year or two, may or may not be important.
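The MSDN behavior quoted above boils down to a simple negotiation rule: device creation succeeds only when the requested feature level does not exceed what the hardware exposes. The sketch below is a toy model of that rule, not the real D3D11CreateDevice API; the helper function and level list are illustrative.

```python
# Direct3D feature levels, ordered low to high (toy model of the
# negotiation described on the MSDN page; not the real D3D API).
FEATURE_LEVELS = ["9_1", "9_2", "9_3", "10_0", "10_1", "11_0"]

def create_device(requested: str, hardware_max: str):
    """Return the granted feature level, or None if creation fails."""
    if FEATURE_LEVELS.index(requested) <= FEATURE_LEVELS.index(hardware_max):
        return requested
    return None

# The emulator reports a high ceiling, so a 10_0 request succeeds there...
assert create_device("10_0", hardware_max="11_0") == "10_0"
# ...but on a Windows Phone 8 device capped at 9_3 the same request fails:
assert create_device("10_0", hardware_max="9_3") is None
# And a part exposing only 9_1 could not satisfy WP8's 9_3 requirement:
assert create_device("9_3", hardware_max="9_1") is None
```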
 
I'm still a bit confused, because 9_1 is still (unless I'm mistaken) ps_2_0 and thus 24-bit. Time to write some shaders, I think.
 

Well that truly is terrible from nvidia... so it looks like MS dropped the D3D feature level down in W8 RT for Tegra 3 over, say, an S4 Pro...
...I guess that's because of nvidia's experience with Windows drivers... I suppose stability and a solid start took priority over slightly more performance and a richer feature set... (if only Qualcomm got their drivers into gear.. grr)

I've looked at some gaming on RT, and surprisingly it looks as good as, if not better than, Android in quality... even if Tegra 3 struggles at points.

Microsoft, I feel, has made a piss-poor decision with SoC choice for their mobile lineup... they picked the original Snapdragon processor for WP7 when the Adreno 205 chip was being launched and the MSM8660 and Tegra 2 were just around the corner... that meant the outdated Snapdragon was the starting point (lowest common denominator) when developing games, and it also likely cost the platform sales due to worse hardware than Android/iOS.

On all three clean-slate OS starts (WP7/8 and W8 RT) Microsoft had significantly better SoC options just around the corner, which would have greatly improved app development in comparison to the competition (since the lowest common denominator would have been the very latest SoC and feature set).

They rushed the decision... only really drivers made any sense on W8 RT... even then they could have gone Tegra 3+... but didn't??
 
Who cares, really? I mean, they themselves claim the difference between FP20 and FP32 is negligible (I'd swear they didn't claim anything similar during the NV30/R300 timeframe :p ), and here we are debating 4 bits up or 4 bits down. All you need to know for now is that FP20 is perfectly sufficient for fragment shading, because they say so, and when they finally go for USCs they'll re-invent the wheel for FP32.

Even that is beside the point. I wonder if they themselves even believe the bullshit they're touting. I think in Tegra 2 the GPU block was roughly 1/5th of the entire SoC die area. That was with 1 Vec4 FP20 PS ALU, 1 Vec4 FP32 VS ALU and 2 TMUs at 333MHz on 40nm.

http://www.anandtech.com/show/4144/...gra-2-review-the-first-dual-core-smartphone/3

Could I have a similar die shot from T4 for comparison?

[Tegra 2 die shot: SoC.jpg]


Now what I am supposed to believe here is that going from 40nm to 28nm was enough to not ONLY shrink the GPU block to 1/8th of the SoC die area, but at the same time increase the units to 12 Vec4 FP20 PS ALUs, 6 Vec4 FP32 VS ALUs and 4 TMUs, all at 672MHz. I'm all eyes if someone would like to give me even an educated guess how something like that can be accomplished, since I must be missing something essential here.
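For what it's worth, a back-of-envelope check of those numbers makes the skepticism concrete. The assumptions below are idealized: area is taken to scale with the square of the feature size (real shrinks deliver less), and any difference in total die size between Tegra 2 and Tegra 4 is ignored.

```python
# Back-of-envelope sanity check of the die-area claim above.
# Idealized assumption: transistor density scales with the square of the
# feature size (real-world shrinks deliver less than this).
ideal_shrink = (40 / 28) ** 2          # ~2.04x density from 40nm -> 28nm

ps_alu_growth = 12 / 1                 # Vec4 FP20 PS ALUs: 1 -> 12
vs_alu_growth = 6 / 1                  # Vec4 FP32 VS ALUs: 1 -> 6
tmu_growth = 4 / 2                     # TMUs: 2 -> 4
area_share_change = (1 / 8) / (1 / 5)  # GPU share of SoC: 1/5 -> 1/8

# Rough implied density gain if the claims all held simultaneously,
# using the PS ALU count as the proxy for GPU size:
implied = ps_alu_growth / area_share_change
print(f"ideal shrink: {ideal_shrink:.2f}x, implied gain: {implied:.1f}x")
```

Even a perfect 40nm-to-28nm shrink buys roughly 2x density, while the claimed unit counts and area share together imply an order of magnitude more, before even accounting for the clock going from 333MHz to 672MHz.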
 
Well that truly is terrible from nvidia...
I struggle to see where Nvidia went terribly wrong on this one. I doubt WinRT was very much on the radar when Tegra 3 was being defined, and it's not as if Angry Birds suffers major image-quality loss when using 20-bit or 24-bit shaders.

If Microsoft decided to use Tegra 3, it's clear that they didn't care very much either. And neither does the market: I don't think I read a single Surface RT review lamenting the lack of pixel precision.

so it looks like MS dropped the D3D feature level down in W8 RT for Tegra 3 over, say, an S4 Pro...
...I guess that's because of nvidia's experience with Windows drivers... I suppose stability and a solid start took priority over slightly more performance and a richer feature set... (if only Qualcomm got their drivers into gear.. grr)
Good point: what's worse? The inconsequential lack of pixel precision or the lack of stability? Which one would be most damaging to Microsoft's brand?
 

I think it's the lamentation of having Tegra 3 with single-channel memory rather than an S4 Pro with dual... a much better chip, although as pointed out, obviously no drivers.

Microsoft just seems to have poor timing when releasing recent OSes... last-gen technology... it would even have been better to have Tegra 3+ with DDR3L than plain old Tegra 3.
 
I think it's the lamentation of having Tegra 3 with single-channel memory rather than an S4 Pro with dual... a much better chip, although as pointed out, obviously no drivers.
Exactly. If lack of pixel precision is terrible, you're in danger of running out of superlatives fast for the stuff that matters.

That also means that Tegra 4 should be fine wrt graphics for whatever Surface or Android product that comes next.
 

Yes, I agree Tegra 4 should be decent performance-wise... all I'm saying is nvidia/MS aren't pushing the envelope like they do on the desktop.

I'm curious to see what smartphone designs Tegra 4 shows up in, what its power consumption is, and what concessions to frequency/performance have been made to get it there.
 

Yes, I agree Tegra 4 should be decent performance-wise... all I'm saying is nvidia/MS aren't pushing the envelope like they do on the desktop.

As far as I can tell it's only nVidia that has been pushing for custom enhanced versions of mobile games to take advantage of faster hardware. Everybody else just codes for the lowest common denominator. What exactly do you want Microsoft and nVidia to do differently?

In general it looks like all these SoC manufacturers are trying to sell hardware that very few people really want or need. I always wonder why people care about GLBenchmark scores on a phone. What exactly can you do on a phone with a badass GPU?

PPI on mobile devices is already soaring well past PC monitors. All of this graphics performance is just going to be wasted on rendering detail that nobody can see. I don't really see the point to be honest. I would love someone to come up with a compelling use case for very fast SoC graphics but I don't see one right now.
 
In general it looks like all these SoC manufacturers are trying to sell hardware that very few people really want or need. I always wonder why people care about GLBenchmark scores on a phone.

Why do folks use public benchmarks? Note that the question is about use and NOT abuse :p

What exactly can you do on a phone with a badass GPU?

Play mobile games while on the move?

PPI on mobile devices is already soaring well past PC monitors.

You're not going to hold a smartphone at a two-meter distance to play any sort of game, are you? Extremely high PPI is extremely useful for text, among other things.

All of this graphics performance is just going to be wasted on rendering detail that nobody can see. I don't really see the point to be honest.

Just because you fail to see the point doesn't mean that some folks aren't killing time on the move watching videos, playing games or whatever else on their mobile devices.

I would love someone to come up with a compelling use case for very fast SoC graphics but I don't see one right now.

When you don't see the point from the get-go, how high are the chances you'd find any sort of compelling use for high graphics performance in SFF mobile devices anyway?
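The PPI-vs-viewing-distance exchange above can be put into numbers: what the eye actually sees is pixels per degree of visual angle, which depends on both pixel density and distance. The display figures below are illustrative assumptions for the comparison, not measurements of any particular device.

```python
import math

def pixels_per_degree(ppi: float, distance_inches: float) -> float:
    """Approximate pixels subtended by one degree of visual angle
    at the given viewing distance (small-angle approximation)."""
    return ppi * distance_inches * math.tan(math.radians(1.0))

# Illustrative figures: a ~440 PPI phone held at 12 inches, versus a
# ~109 PPI desktop monitor viewed from 24 inches (both assumptions).
phone = pixels_per_degree(440, 12)
monitor = pixels_per_degree(109, 24)
print(f"phone: {phone:.0f} px/deg, monitor: {monitor:.0f} px/deg")
```

Even after accounting for the closer viewing distance, a high-PPI phone delivers roughly twice the angular resolution of a typical monitor under these assumptions, which is the crux of the "is the extra detail visible" disagreement.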
 
As far as I can tell it's only nVidia that has been pushing for custom enhanced versions of mobile games to take advantage of faster hardware. Everybody else just codes for the lowest common denominator. What exactly do you want Microsoft and nVidia to do differently?
http://www.pocketgamer.co.uk/r/iPad/Need+for+Speed:+Most+Wanted/feature.asp?c=46679
http://www.pocketgamer.co.uk/r/iPad/Modern+Combat+4:+Zero+Hour/feature.asp?c=47563

Arguably, the reason nVidia has to ask/help developers make custom enhanced versions of mobile games for their SoCs is that Apple doesn't need to ask developers to do it for their PowerVR GPUs. Need for Speed: Most Wanted and Modern Combat 4, both high-end mobile games from two different big developers, run with noticeably better graphics on the SGX543MP2 iPad mini compared to the Tegra 3 Nexus 7 while still maintaining playable performance. iOS vs. Android market pressures may be the initiating reason why games tend to turn out better on iOS, more than inherent PowerVR vs. nVidia differences, but presumably PowerVR Android devices get the iOS graphics enhancements for free since the work is already in the code. nVidia pushing a selection of enhanced games may be less an attempt to come out on top of competing GPUs and more an attempt to keep up with the silent presence of PowerVR-optimized games.

In general it looks like all these SoC manufacturers are trying to sell hardware that very few people really want or need. I always wonder why people care about GLBenchmark scores on a phone. What exactly can you do on a phone with a badass GPU?
Well, putting games aside, nVidia is saying that it enables fancier camera features.
 
I don't think you're following me, Ail. I play a lot of games on my phone and use my Nexus 7 every day. I'm not questioning the need for GPUs, just the need for faster ones - in response to the comment about pushing the envelope.

All of the things you mentioned - mobile gaming, higher-definition text - don't require blockbuster graphics performance. What can we do on tomorrow's SoCs that we can't do today?
 
Play mobile games while on the move?
Angry Birds!

Just because you fail to see the point doesn't mean that some folks aren't killing time on the move watching videos, playing games or whatever else on their mobile devices.
Because watching videos and whatever else they do on their mobile devices stresses current-generation GPUs?

When you don't see the point from the get-go, how high are the chances you'd find any sort of compelling use for high graphics performance in SFF mobile devices anyway?
I'm fine with GPU perf increasing, of course. But I totally get why Nvidia decided to favor CPU over GPU for Tegra 3. Being known as a PC GPU company doesn't mean you shouldn't first get the performance fundamentals right. And for the vast majority of users, things are still CPU-bound. This is no different from Intel with PCs: focus on the CPU first; once that levels off, start increasing the GPU.
 

Ok then, since it's now a bit clearer: something for CAD on the move, and maybe a pinch of ray-tracing capability on the move here and there?
 