NVIDIA Tegra Architecture

Most but not all :rolleyes::LOL:

That quoted sentence wasn't about the Tegra 5's iGPU, it was about the Kayla board, which has a GK208.
All it says is that GK208 doesn't have all the CUDA features that a Tesla K20 GPU has.
(I don't know what "features" could be missing, but it could be just performance-related factors)

You can't use it as proof that Tegra 5 won't have a full-fledged Kepler iGPU.
 
You can't use it as proof that Tegra 5 won't have a full-fledged Kepler iGPU.

Feel free to prove the exact opposite then.

All it says is that GK208 doesn't have all the CUDA features that a Tesla K20 GPU has.
(I don't know what "features" could be missing, but it could be just performance-related factors)
There are quite a few candidates that would make sense to miss for the moment in order to keep die area and power consumption more reasonable; DP could be one of them.

Still don't believe Tegra 5 has a real Kepler GPU, despite all the evidence to the contrary?

So where's the "evidence" exactly?
 
Feel free to prove the exact opposite then.
I'm not willing to prove something I'm not sure about :)


There are quite a few candidates that would make sense to miss for the moment in order to keep die area and power consumption more reasonable; DP could be one of them.
DP in GK208 is already at 1/24th of SP performance. Are you suggesting lowering that performance even further, or cutting DP capabilities altogether?
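For scale, the 1/24 ratio can be put into rough numbers. A back-of-the-envelope sketch (the core count and clock are illustrative assumptions for a GK208-class part, not confirmed figures):

```python
# Rough FLOPS estimate for a GK208-class part with a 1/24 DP:SP rate.
# Core count and clock are illustrative assumptions, not official specs.
cuda_cores = 384        # assumed GK208-class configuration (2 SMX x 192 cores)
clock_ghz = 1.0         # assumed clock, GHz

sp_gflops = cuda_cores * 2 * clock_ghz   # 2 FLOPs per FMA per clock
dp_gflops = sp_gflops / 24               # 1/24 DP:SP throughput ratio

print(f"SP: {sp_gflops:.0f} GFLOPS, DP: {dp_gflops:.0f} GFLOPS")
```

At that ratio DP is already near token levels, so cutting it further mostly saves die area rather than meaningful throughput.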


So where's the "evidence" exactly?

1 - nVidia repeatedly calls it a Kepler GPU
2 - Kayla has a full GK208 and nVidia gave it to developers to get used to CUDA 3.5 in an ARM environment. It wouldn't make much sense to tell developers to learn CUDA 3.5 on an ARM SoC and then present an SoC with a reduced feature set. For example, FP64 is probably mandatory in CUDA 2 and up.
 
I'm not willing to prove something I'm not sure about :)

Meaning it's one side's uncertainty vs the other side's uncertainty. Given that power consumption is THE most critical factor, which scenario is likelier? A fitted Kepler "approximation", or the entire enchilada with insane transistor density crammed into a handful of square millimeters?

DP in GK208 is already at 1/24th of SP performance. You suggest lowering that performance even further or cutting DP capabilities altogether?
Kepler GPUs have dedicated FP64 units in order to save power; I don't know what the design looks like, as probably no one outside NV does, but I don't see an absolute necessity yet for FP64 in SFF mobile.

1 - nVidia repeatedly calls it a Kepler GPU
Let me double check: yes, it's NVIDIA we're talking about, which has NEVER EVER stretched the truth or lied at all in its history? Yes, others do it too and it's called marketing, but NV doesn't exactly have a humble track record in that regard.

2 - Kayla has a full GK208 and nVidia gave it to developers to get used to CUDA 3.5 in an ARM environment. It wouldn't make much sense to tell developers to learn CUDA 3.5 on an ARM SoC and then present an SoC with a reduced feature set. For example, FP64 is probably mandatory in CUDA 2 and up.
And there are ever-repeating notes from NVIDIA itself that it's NOT a 1:1 copy. So go ahead and blame me for calling it an approximation of the Kepler architecture and not the real thing. I don't know whether FP64 is mandatory for any of the CUDA versions or not, but since you folks seem to "know" something I don't, I'd love to hear about it.

In the end it's to both NV's and Logan's benefit if the ULP GF in Logan is a custom design with strong resemblances in some parts to Kepler; that's all it really needs, and it would be far better for perf/mW, perf/mm2 and perf/$ ratios, but I'm obviously talking to a wall here.
 
Oh you mean that CEO that showed a Fermi SKU mounted with woodscrews? Point taken; now I'm definitely convinced.

That's a failed analogy: Fermi was real and claims Nvidia made about its capabilities were in fact true.

As I said a few months ago, I'm going to enjoy your retraction once Tegra 5 hits the market.

Although Nvidia has never said Tegra 5 will be CUDA capability 3.5: only that it's Kepler derived, which means >= 3.0.
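For reference, the practical gap between compute capability 3.0 and 3.5 is small but real. The feature sets below summarize my reading of NVIDIA's compute-capability tables and may be incomplete:

```python
# Kepler compute-capability feature deltas (summary, possibly incomplete).
kepler_features = {
    3.0: {"base Kepler ISA", "FP64", "warp shuffle"},   # GK104/GK106/GK107
    3.5: {"base Kepler ISA", "FP64", "warp shuffle",
          "dynamic parallelism", "funnel shift",
          "255 registers per thread"},                  # GK110/GK208
}

# What a 3.0-only Tegra GPU would lack relative to Kayla's GK208:
delta = kepler_features[3.5] - kepler_features[3.0]
print(sorted(delta))
```

Note that FP64 is present at both levels (it has been since compute capability 1.3), so "Kepler derived, >= 3.0" by itself says nothing about dropping double precision.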
 
And did Fermi eventually come out or did they never release it?

Yes it did, but that doesn't mean I don't have a single reason to doubt his or NV's credibility in the end.

If you can't tell the difference between the two situations, then I certainly won't bother anymore. Frankly, neither should anyone else but that is their business... http://www.youtube.com/watch?v=8kIQWWJs_po&feature=player_detailpage#t=257s

Beyond OpenGL 4.3 is there something else you are critically anticipating?
If you can't understand what I'm pointing at then it's truly not worth bothering at all. In the end we will eventually see what the Logan GPU looks like after its launch. Given my track record as a simple user, I will stand up and apologize in public if I'm wrong; let's see then what happens with you gentlemen if it shouldn't be the case.
 
Ailuros said:
Yes it did, but that doesn't mean I don't have a single reason to doubt his or NV's credibility in the end.
Then it is impossible to have a conversation with you on this topic as ALL information regarding nvidia architectures has nvidia as its primary source.

Ailuros said:
I will stand up and apologize in public if I'm wrong
I would rather you just be a reasonable human being in the first place...

But if you insist, by what criteria, exactly, are we to judge that you are wrong?

The architecture is supposedly Kepler based, so if the feature set is greater than or equal to GK104 are we to interpret that as you being wrong?
 
I've nothing against the description "Kepler based" or derivative or anything similar thereof; I've explained in several posts what kind of feature/performance/power consumption balance I expect for it. I never doubted the feature set itself. You might want to read back in this very thread what I've said on the topic, and you might actually see whether there's any reasoning to it or not.

And since you mention GK104, I recall my "speculations" about it being more accurate than any trash tabloid page out there before its launch; just a pure coincidence of course too :rolleyes:
 
So basically you won't say anything specific enough to be considered wrong, or you qualify everything you do say with a maybe, but expect it to be accepted as fact?

Ailuros said:
I've explained in several posts what kind of feature/performance/power consumption balance I expect for it

Perhaps you could just copy and paste 1 single concrete prediction where you expect the Tegra 5 GPU to be "sub-kepler" architecturally.

Otherwise, wtf are you going on about?
 
All it says is that GK208 doesn't have all the CUDA features that a Tesla K20 GPU has.
(I don't know what "features" could be missing, but it could be just performance-related factors)

The only feature that appears to be missing is ECC.
Usable FP64 performance could be construed as a feature that is present or not, but that's stretching things; all GPUs without exception since Fermi have supported FP64, too. And if FP64 is unused in your code, that's just a little bit of dormant silicon which doesn't hurt you much.

I don't know if mythical code that uses double precision once in a while for limited, critical parts actually exists or not. Nor whether there will be students developing Tesla code (soon to be obsoleted by Maxwell) on Tegra 5 laptops.
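That "limited, critical DP" pattern does exist in practice: a common example is keeping data in single precision but accumulating in double precision to limit rounding drift. A minimal sketch (illustrative, not taken from any real Tesla codebase):

```python
import numpy as np

# Sum many float32 values two ways: a float32 running sum vs. a float64
# accumulator. The data stays single precision; only the small "critical"
# accumulation step uses double precision.
data = np.full(100_000, 0.1, dtype=np.float32)

acc32 = np.float32(0.0)
acc64 = np.float64(0.0)
for x in data:
    acc32 += x               # rounding error grows as the sum gets large
    acc64 += np.float64(x)   # float64 absorbs each addend almost exactly

true_sum = float(np.sum(data.astype(np.float64)))
print(abs(acc32 - true_sum), abs(acc64 - true_sum))
```

The float64 accumulator stays close to the exact sum while the float32 one drifts noticeably, which is the kind of narrow FP64 use a mostly-SP mobile GPU could still serve even at a token DP rate.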
 
So basically you won't say anything specific enough to be considered wrong, or you qualify everything you do say with a maybe, but expect it to be accepted as fact?

I am not going to repeat over and over again what I've said on the topic. If you are interested read into it and find out.

Perhaps you could just copy and paste 1 single concrete prediction where you expect the Tegra 5 GPU to be "sub-kepler" architecturally.

Why should I bother? You barged into the debate without even having a clue what I have actually said, or whether it makes sense, and with quite an aggressive tone. Now you expect me to waste further time sending you links or copy/pasting content? The database is there, read into it.

Otherwise, wtf are you going on about?

That's a question I should be actually asking given the above.
 
Ailuros said:
The database is there, read into it.
I looked. I did not see anything of merit.

I'll interpret your response as my being correct.

Perhaps you could just copy and paste (or make) 1 single concrete prediction where you expect the Tegra 5 GPU to be "sub-kepler" architecturally.
 
Had I wasted more time I would have found far more relevant material in this very thread; here are just a few recent ones, which must have been very hard to find:

http://forum.beyond3d.com/showpost.php?p=1763859&postcount=1322
http://forum.beyond3d.com/showpost.php?p=1764004&postcount=1333
http://forum.beyond3d.com/showpost.php?p=1764113&postcount=1336
http://forum.beyond3d.com/showpost.php?p=1764130&postcount=1339


Perhaps you could just copy and paste (or make) 1 single concrete prediction where you expect the Tegra 5 GPU to be "sub-kepler" architecturally.

I expect the ULTRA LOW POWER GeForce to be exactly what its name implies: the lowest possible power consumption for the highest possible functionality. If they don't take any shortcuts, the result is not going to fit into a tablet form factor or a =/<5W TDP. There are no magic wands last time I checked, and yes, sorry, I stepped on Tinkerbell by mistake.
 
Not a single one of those posts or your statement describes anything functionally inferior to Kepler.

All you seem to be saying is that as a mobile chip it will be lower performance than a desktop part with greater power efficiency. Those are vague and rather obvious statements of no substantive worth.


Perhaps you could just copy and paste (or make) 1 single concrete prediction where you expect the Tegra 5 GPU to be "sub-kepler" architecturally.
 
Ailuros is only saying that the GPU in Tegra 5 won't be a 1:1 copy of the Kepler architecture. That's what the original argument was about. I don't remember who said it and I'm too lazy to reread this thread, but someone implied that the GPU in Tegra 5 would be 1 or 2 Kepler SMX units with hardly any alterations, at least none that would affect functionality. That's what started the discussion, with Ailuros saying that that would be absurd.

Can we move on now?
 