Nvidia Tegra

Graphics power on par with GeForce 6 Series.

Pigs can fly after all, apparently; I hope you realize there are light-years of difference between an embedded GPU of any kind and a desktop GPU.

Future Tegras will be GT300-based; I would expect Tegra 2 to be so.

Really? I think you missed a spot on the Tegra roadmap then probably.
 
Graphics power on par with GeForce 2.6 Series.

Fixed...

There's no way in hell it'll hit even remotely close to that - at best we're still at least an order of magnitude shy. Now, could it be *based* on GeForce 6 tech? Yeah - it could also be steam-driven, or clockwork, though neither of those is likely either.
 
Well in terms of features supported it could very well be a G8x/9x derivative. It's just that GF6 series performance is fairly ridiculous.
 
I'd say so - these GPUs already support parallax mapping and everything you'd expect from a standard PC GPU of only a year or two ago, so it isn't preposterous to conclude GF8x/9x feature similarity for future Tegra-type devices, but the performance claim, I'd say, is definitely unreasonable at this moment in time.
 
GeForce 6100 or 6200 performance sounds less outlandish. A mere generation number doesn't really give away a performance level: do we mean the 4-pipe or the 24-pipe part, or the 8-SP or the 240-SP part? Which bandwidth? ;)
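To put rough numbers on that point, here's a quick sketch of how much peak throughput can vary within one generation label (all pipe counts and clocks below are approximate, from-memory assumptions, not checked specs):

```python
# Rough peak-throughput arithmetic (sketch; pipe counts and clocks are
# approximate assumptions, not verified specs).

def pixel_fillrate_mpix(pipes, core_mhz):
    """Peak pixel fillrate in Mpixels/s: pipes * core clock."""
    return pipes * core_mhz

# Two parts sharing a "GeForce 6" style generation label can differ hugely:
low_end = pixel_fillrate_mpix(pipes=4, core_mhz=300)    # 6200-class part
high_end = pixel_fillrate_mpix(pipes=16, core_mhz=400)  # 6800 Ultra-class part

print(low_end)                # 1200 Mpix/s
print(high_end)               # 6400 Mpix/s
print(high_end / low_end)     # ~5x gap within a single generation
```

Same "GeForce 6" badge, roughly a 5x fillrate gap - which is why a bare generation number says so little.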


Really? I think you missed a spot on the Tegra roadmap then probably.

Sure, I should have checked that. Here's the stuff I missed, that "T2" or "gen 2":
http://www.pcinpact.com/actu/news/51785-nvidia-tegra-android-windows-smartbook.htm
http://www.dvhardware.net/article37225.html

Though no real clue on what gen it is.
I don't believe they will do a G9x; the Tegra gen I missed might be the same, done with more pipes and bandwidth.
 
Well, I'm now believing it's about the CPU: go from a single ARM11 to a dual Cortex, and here's a magical 4x figure. Low power is kept by moving to 40nm.

The CPU grunt is actually what's interesting; the current one looks a tad weak for Quake 3 framerates and web browsing.
Think of the ubiquitous Core 2 Duo laptops with Intel IGPs. Those are the everyday man's workhorse, with video and graphics abilities (The Sims, Warcraft, and Left 4 Dead run on those machines).

There's potential there. I'm now wishing for a Tegra nettop, < 10 watts!
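For what it's worth, the "magical 4x" multiplies out if you assume both a core-count doubling and roughly a doubling of per-core throughput going from ARM11 to a Cortex-A9-class core (both ratios below are my assumptions, not vendor numbers):

```python
# Back-of-envelope for the "magical 4x" CPU claim: two independent
# improvements multiply. Both ratios here are assumptions, not vendor data.

def speedup(core_ratio, per_core_ratio):
    """Combined speedup from more cores and faster cores (ideal scaling)."""
    return core_ratio * per_core_ratio

overall = speedup(core_ratio=2.0,      # single ARM11 -> dual Cortex
                  per_core_ratio=2.0)  # assumed per-core uplift, ARM11 -> A9

print(overall)  # 4.0
```

Ideal scaling, of course - real multithreaded workloads won't hit the full 2x from the second core.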
 

What speaks against a unified shader core especially if you'd start thinking in the OpenCL/GPGPU direction?
 
I'm now wishing for a Tegra nettop, < 10 watts!
I don't know how accurate it is, but according to Notebook Hardware Control my Fujitsu P1510 usually draws < 10 watts. By usually I mean when reading a web page and the system is mostly idle. This is with a ULV Pentium M.
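At those draw levels, runtime is just battery capacity over power draw. A quick sketch (the 40 Wh pack size is a made-up example, not the P1510's actual battery):

```python
# Simple battery-runtime estimate: hours = capacity (Wh) / draw (W).
# The 40 Wh figure is an illustrative assumption, not a real spec.

def runtime_hours(battery_wh, draw_watts):
    """Runtime in hours at a steady power draw."""
    return battery_wh / draw_watts

print(runtime_hours(battery_wh=40, draw_watts=10))  # 4.0 hours at 10 W
```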
 
What speaks against a unified shader core especially if you'd start thinking in the OpenCL/GPGPU direction?

Nothing. My argument is that if they do so, they will jump straight to GT300, and I would bet $1 they will do so for that Tegra 2 in early 2010 :).

That way there's one fewer generation to support (development, libraries, drivers), and it's better at OpenCL and CUDA.
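On the unified-shader point, a toy model shows why a shared pool beats dedicated vertex/pixel units whenever the workload is lopsided (unit counts and workloads below are purely illustrative, not any real chip's configuration):

```python
# Toy model of unified vs. split shader hardware. With dedicated vertex and
# pixel units, the more loaded queue bounds the frame time; a unified pool
# just divides total work across all units. All numbers are illustrative.

def split_time(vertex_work, pixel_work, vs_units, ps_units):
    """Frame time with fixed VS/PS units: the busier side is the bottleneck."""
    return max(vertex_work / vs_units, pixel_work / ps_units)

def unified_time(vertex_work, pixel_work, total_units):
    """Frame time with a unified pool: total work over total units."""
    return (vertex_work + pixel_work) / total_units

# A pixel-heavy frame on a 2 VS + 4 PS part vs. the same 6 units unified:
v, p = 2.0, 16.0
print(split_time(v, p, vs_units=2, ps_units=4))  # 4.0 -- PS side bottlenecks
print(unified_time(v, p, total_units=6))         # 3.0 -- pool soaks up imbalance
```

The same pool also maps naturally onto OpenCL/CUDA-style compute kernels, which is the GPGPU angle.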
 

It would be a very interesting twist in the entire story; but with not even the D12U being available yet, aren't you a wee bit too optimistic with that one?

Don't waste that dollar ;)
 
NVIDIA: Big Tegra launch coming in Q1 2010

No more info at the moment, but I suspect they will announce a new line of SoCs (Tegra 2) and maybe they will show the first smartphone running WM 6.5.1 (beta 7). Maybe Arun knows something about this big launch ;)

Can't wait to see what will happen during CES and MWC!
 
Assuming he's thinking of a chip launch as you assume (same here, TBH), I think it's very simple to figure out what will happen based only on public information. NVIDIA has claimed they were going to have a major new generation every ~12 months, with derivatives for that generation coming ~6-9 months after the first part. The reason the Tegra 2 generation hasn't been publicly announced is that they wanted to focus on the APX2600 at MWC09 and not reduce the hype for products coming out this year just because they were about to sample a chip that won't be available in end-products for some time. So now they're going to announce everything at once, much closer to end-product launch, at MWC10 (and maybe CES10 for the netbook chip?).

Since they've hinted several times there would be a specialized MID/netbook chip this generation (although it could also be used in flagship smartphones to compete with OMAP4 I'm sure) and they'll likely also want a lower-end chip to expand their addressable market, this gives us three chips this generation (at least). Then they have indicated Tegra3 one year after that, which corresponds nicely to TSMC's claim that the first 28LPG tape-outs would happen in 1Q10 (Qualcomm & NV likely being in the first batch).

Based on a basic understanding of market trends and a tiny bit of info from the grapevine about timeframes, this gives us the following *speculative* roadmap:
1) 65nm, 1xARM11, 720p+ Decode, 720p Encode, 2xTMU/2xVS, 32-bit LPDDR1. Tape-out 2H07, Sampling January 2008.
2) Derivative AP16, basically same thing but bugfixes/minor goodies. Tape-out 3Q08(?).
3) 40nm 1xA9, 1080p Decode, 1080p Encode, 4xTMU/2xVS, 32-bit LPDDR2. Tape-out 4Q08.
4) 40nm 2xA9, 1080p High Profile Decode, 1080p Encode, ?xTMU/?xVS, 64-bit 1.35v DDR3 or LP-DDR2. Tape-out mid-2009.
5) 40nm 1xA9, 720p Decode, 720p Encode, 1xTMU/1xVS, 16-bit LPDDR2. Tape-out 2H09.
6) 28nm 4xA9, 2x1080p High Profile Decode, 1080p (HP?) Encode, Next-Gen GPU(?), 64-bit LPDDR2 or 1.35v DDR3. Tape-out 1H10.
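The bus widths in that speculative roadmap imply quite a spread in memory bandwidth. A rough calculation (the data rates below are assumed typical values for each memory type, not confirmed specs for any of these chips):

```python
# Peak memory bandwidth implied by bus width and data rate (sketch; the
# data rates are assumed typical values for each memory type, not specs).

def bandwidth_gbs(bus_bits, data_rate_mtps):
    """Peak bandwidth in GB/s: (bus width in bytes) * mega-transfers/s."""
    return (bus_bits / 8) * data_rate_mtps / 1000

print(bandwidth_gbs(bus_bits=32, data_rate_mtps=333))    # ~1.3 GB/s, 32-bit LPDDR1-ish
print(bandwidth_gbs(bus_bits=64, data_rate_mtps=1066))   # ~8.5 GB/s, 64-bit DDR3-1066-ish
```

Roughly a 6x gap between the first-gen 32-bit LPDDR1 setup and a 64-bit DDR3 one, before even touching GPU or CPU changes.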

This compares favorably with the competition in terms of functionality and timing if correct, although as always the big questions remain die size, power, and pricing strategy. Given what they achieved on these fronts at 65nm, I'm quite optimistic - but as always, the competition is definitely tough. And given some of the great things being done by pure-play baseband and connectivity companies, I'm very optimistic that standalone application processors have a very bright future. They won't get locked out; if anything, they'll actually be advantaged.

There, is that enough information overload for you? ;)
 
Wow, that's more than I expected :smile:
But if you're so generous with information, maybe some scoops about Snapdragon at 28nm? They'll probably announce it around MWC'10 and release it in devices a year later.
Do you know what we can expect from Snapdragon 3 ;) ?
 
Well, there won't be a single 28nm tape-out before 1Q10, so sampling in 2H10 and first MID devices in 2H11 wouldn't be a bad bet. If they bypass 3G certification by shipping the first chips just as application processors, 3Q11 isn't out of reach. Either way, it'll obviously be a long time before we see either it or Tegra3 in end-products, sadly.

I don't really know anything about that chip's architecture. I can speculate on how they'll handle LTE in that timeframe, though: one chip with HSPA+, another with HSPA+/EV-DO/LTE. There's not a single EV-DO operator that won't care about LTE by then, and not a single HSPA+ operator that will care about EV-DO, but there'll still be a large die-size cost to LTE, so that'll still be the best approach by far. Regarding CPU/3D/video, I wish I knew. They really haven't talked about what they're doing on these fronts beyond 45nm; Qualcomm's high-end 45nm chips are a very big change in terms of 3D/video IMO, so maybe this will be more incremental. Or maybe not, and I'll be very pleasantly surprised! :)
 
What do you mean by a very big change in terms of 3D/video?
New video codecs, performance improvements? I thought it was still the same Tensilica Xtensa core...
Isn't the GPU based on the quad-pipeline Z430? Snapdragon 2 is supposed to be 4x faster than Snapdragon 1, so it would seem so.

Are my assumptions wrong?
 
These semiconductor companies continue to be wildly over-optimistic on their timetables for process transitions, now for the 28nm generation.
 