Tegra 3 officially announced; in tablets by August, smartphones by Christmas

How many monitors, let alone 10" tablets and netbooks will have that kind of resolution anytime in the lifespan of Tegra 3?
 
Doubtful. Few have even matched the Retina display on the iPhone 4. Apple simply has an economy of scale that others don't, and my guess is that 2048x1440 displays will be prohibitively expensive for most OEMs for well beyond the lifespan of Tegra 3.

We'll likely see 1080p first and even that may not be for a while.
 
BSN's Wayne "leak" is completely ridiculous. I could come up with something more credible in five minutes. 8xA9 isn't theoretically possible (without essentially making them two separate nodes - arguably viable for servers but even Calxeda didn't bother) and 8xA15 on 28nm is going to take *way* too much power. Even a quad-core A15 is very aggressive before 20nm.

But hey, go on most blog sites and watch how people anticipate the 1440p video capabilities and tout how "fast" Kal-El is.
Yeah, it is very silly - especially when 1440p 30fps (or is it 24fps? I'm not entirely sure) is really just reusing most of the same silicon as 1080p 60fps or 3D 1080p 30fps. I'd even be surprised if the APQ8064 couldn't do it as well with the proper software work (unless the video architecture is less flexible than it was in the 40nm generation).
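The "reusing most of the same silicon" point is easy to sanity-check with back-of-the-envelope pixel throughput numbers (a rough sketch; it assumes "1440p" means 2560x1440 and "3D" means two independent full-resolution streams):

```python
# Rough pixel-throughput comparison of the video decode modes discussed above.
# Assumptions: "1440p" = 2560x1440, "3D" = two independent 1080p streams.
modes = {
    "1080p 60fps":    1920 * 1080 * 60,
    "3D 1080p 30fps": 1920 * 1080 * 30 * 2,
    "1440p 30fps":    2560 * 1440 * 30,
    "1440p 24fps":    2560 * 1440 * 24,
}
for name, rate in modes.items():
    print(f"{name}: {rate / 1e6:.1f} Mpixels/s")
# 1440p 30fps (~110.6 Mpix/s) actually needs slightly *less* throughput than
# 1080p 60fps (~124.4 Mpix/s), so the same decode hardware plausibly covers both.
```

That of course ignores codec-level differences (level limits, reference frame buffering), but as a first-order throughput argument it holds up.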

Mind you 1440p isn't *completely* useless. I'm typing this on a 27" 2560x1440 PC monitor from Dell with a HDMI input. I'm still not quite sure why I'd connect my smartphone to it though...
 
Did you even read the article? It states there are to be two versions of Wayne. Clearly the "Robin" version with 4 A15 cores @ ~1.5GHz is the one meant to be 2x faster than Kal-El... I'm not saying they are correct, but if you are going to rake them over the coals, the least you could do is actually read the article carefully.

Yes, I wasted my time reading it. And the above detail changes what, exactly? Read Arun's last reply above; it might help.
 
DPI has certainly increased, even though it is nowhere near Apple's levels.
Not in the high-end. We've been stuck around 800x480 for a long time despite screen sizes going up (and the Motorola Atrix 4G using Pentile makes it even worse despite being 960x540). The only reason all the early Android phones used 480x320 is that's the only resolution the OS supported back then so it doesn't really count. There were feature phones with 800x480 displays before - heck, I remember being quite impressed by the LG KM900 Arena at MWC09 which had a 3.0" 800x480 screen: http://www.gsmarena.com/lg_km900_arena-2666.php (and an AMD Imageon coprocessor btw - one of their last major design wins).
 
nVidia deems it sufficient to dub them "cores", so I don't see why not.

If you have a halfway decent explanation of how and why an ALU vector lane can be justified as a core, then by all means go ahead.

In either case, my point was that even if you double the number of shaders in Kal-El, that would still only be enough to catch up to the SGX543MP2.

No, it won't. There are 36 "cores" under that silly marketing definition in the MP2, capable of both pixel and vertex shading.
 
Tegra's lanes/pipes are all on the same core as far as I can tell.

Why their marketing decides not to treat that as the advantage it actually is (no MP overhead), and instead plays a "count the cores" contest -- a competition no other company is actually playing -- in order to put themselves on a comparable scale to competitors, is ridiculous.
 
Ailuros said:
If you have a halfway decent explanation of how and why an ALU vector lane can be justified as a core, then by all means go ahead.
All they're doing is taking the five-year-old desktop terminology that ATI introduced with R600 and applying it to mobile GPUs...

PowerVR or their licensees will eventually discover the power of marketing too. :)
 
Arun said:
Yeah, it is very silly - especially when 1440p 30fps (or is it 24fps? I'm not entirely sure) is really just reusing most of the same silicon as 1080p 60fps or 3D 1080p 30fps. I'd even be surprised if the APQ8064 couldn't do it as well with the proper software work (unless the video architecture is less flexible than it was in the 40nm generation).
I don't get the hostility towards a well executed marketing coup: no matter how high the increase in performance of a new mobile chip, it's hard to demonstrate this in a visual way at a trade show. ("Oooh, if you look carefully with a microscope, you can see that it now supports this profile video decode instead of that profile decode.")

Sure, the high resolution may not be practically useful, but for the less informed, it's a compelling demonstration of the increased power inside. And the fact that we're talking about it shows that it worked.
 
I don't get the hostility towards a well executed marketing coup: no matter how high the increase in performance of a new mobile chip, it's hard to demonstrate this in a visual way at a trade show.
Oh, absolutely agreed - personally I rather meant that websites that believe this is a good proof of superior overall performance are being very silly. Then again it is true that the OMAP4470 and MSM8960 only support 2D 1080p 30fps whereas 1440p support implies Kal-El will very likely also support 3D 1080p 24fps. That's slightly more useful because it should make it possible to run Blu-ray 3D streams without transcoding.

As for terminology - I'm pretty sure AMD still talked about SPs in the R600 timeframe, not cores. It's NVIDIA which started with this.
 
All they're doing is taking the five-year-old desktop terminology that ATI introduced with R600 and applying it to mobile GPUs...

G80 was released before R600, so it's more like NVIDIA started that one too. At least in those two cases "stream processors" refers to unified (USC) ALUs, so each one can be used for both vertex and pixel shading. The ULP GeForce in Tegra 2 has, according to NV, "8 cores", which probably stands for 1 Vec4 PS ALU and 1 Vec4 VS ALU. In other words, when marketing violates terminology here it goes from dumb to dumber.

PowerVR or their licensees will eventually discover the power of marketing too. :)

That depends on whether they change strategy. Samsung initially marketed the SGX540 in its S5PC1x0 as being capable of >90M Tris/s before the SoC even shipped. When it shipped, the manual correctly stated 20M Tris/s, so there must have been some sort of intervention to set the record straight.
 
nVidia deems it sufficient to dub them "cores", so I don't see why not. In either case, my point was that even if you double the number of shaders in Kal-El, that would still only be enough to catch up to the SGX543MP2.

Marketing idiocy aside, the statement "doubling the number of shader pipes" doesn't really give you enough information to decide how well it'll perform. For example, if they're not going unified and they double the VS lanes from 4 to 8 and the PS lanes from 8 to 16, then they're still well behind on per-clock flops available for any single op. Now, if they've gone unified and we're talking 24 real scalar pipes, then they're at 75% of the per-clock flops of a 543MP2, which scalar-vs-vector efficiency may be able to compensate for. BUT throw in the overdraw reduction and lower memory bandwidth associated with PowerVR's TBDR, and it's possible that it will still struggle against the 543MP2.
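That 75% figure can be reproduced with a quick sketch. Caveat: the 4-pipes-of-Vec4 breakdown for an SGX543 core is my reading of public material, not a confirmed spec, and a MAD is counted as 2 flops:

```python
# Back-of-the-envelope per-clock flops, counting a MAD as 2 flops.
# Assumed (not confirmed): each SGX543 core has 4 Vec4-capable ALU pipes.
MAD_FLOPS = 2

sgx543_core = 4 * 4 * MAD_FLOPS       # 4 pipes x Vec4 = 32 flops/clock
sgx543mp2   = 2 * sgx543_core         # two cores = 64 flops/clock

hypothetical_unified = 24 * MAD_FLOPS # 24 scalar pipes = 48 flops/clock

print(hypothetical_unified / sgx543mp2)  # 0.75 -- the 75% figure above
```

And that ratio is before accounting for TBDR's overdraw and bandwidth advantages, which is exactly the point being made.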

John.
 
Arun said:
As for terminology - I'm pretty sure AMD still talked about SPs in the R600 timeframe, not cores. It's NVIDIA which started with this.
Duh, right. I completely forgot about the SP term.
 
Marketing idiocy aside, the statement "doubling the number of shader pipes" doesn't really give you enough information to decide how well it'll perform. For example, if they're not going unified and they double the VS lanes from 4 to 8 and the PS lanes from 8 to 16, then they're still well behind on per-clock flops available for any single op. Now, if they've gone unified and we're talking 24 real scalar pipes, then they're at 75% of the per-clock flops of a 543MP2, which scalar-vs-vector efficiency may be able to compensate for. BUT throw in the overdraw reduction and lower memory bandwidth associated with PowerVR's TBDR, and it's possible that it will still struggle against the 543MP2.

John.

Well yes, we're all speculating here, and I was speaking of a best-case scenario. It's not going to blow anything away, except maybe the current crop of Android devices.
 