Transmeta licenses Longrun and [other] technology to Nvidia

CarstenS

Moderator
From here:
"Transmeta Corporation (NASDAQ: TMTA) today announced that it has entered into an agreement with NVIDIA Corporation granting NVIDIA a non-exclusive license to Transmeta's LongRun and LongRun2 technologies and other intellectual property for use in connection with NVIDIA products.

The agreement grants to NVIDIA a non-exclusive and fully paid-up license to all of Transmeta's patents and patent applications, and a non-exclusive license and transfer of certain Transmeta advanced power management and other computing technologies."

Strangely, I was immediately reminded of how Transmeta managed to emulate x86 in its Crusoe processors without having to obtain an x86 license. Or did my memory fail me on that?
 
It says they license 'other intellectual property'... which, among other things, of course includes:

While Transmeta no longer manufactures microprocessors, Transmeta microprocessor intellectual property is available for license to enable third parties to benefit from Transmeta innovations in processor design, translation, and energy efficiency. Transmeta's Microprocessor intellectual property includes:

* Unique, patented technologies enabling a "software-based microprocessor"
* Advanced binary translation technology
* Proprietary code optimization algorithms and techniques
* Proven-compatible x86 architecture implementation, including extensive compatibility suites
http://www.transmeta.com/tech/microip.html
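As a rough illustration of the "software-based microprocessor" idea listed above (a hypothetical sketch, not Transmeta's actual Code Morphing Software): guest code blocks are interpreted on a slow path until they run hot, at which point a translated version is cached and reused on later executions. The block format, threshold, and "compilation" step here are all invented for illustration.

```python
HOT_THRESHOLD = 2  # translate a block after this many interpretations

class ToyTranslator:
    def __init__(self, program):
        self.program = program          # block_id -> list of (op, arg) guest ops
        self.exec_counts = {}           # how often each block has been interpreted
        self.translation_cache = {}     # block_id -> "translated" host closure

    def interpret(self, block, acc):
        # Slow path: decode and execute one guest op at a time.
        for op, arg in block:
            if op == "add":
                acc += arg
            elif op == "mul":
                acc *= arg
        return acc

    def translate(self, block):
        # "Translation": fold the whole block into one host-level closure,
        # standing in for emitting native VLIW code.
        ops = list(block)
        def compiled(acc):
            for op, arg in ops:
                acc = acc + arg if op == "add" else acc * arg
            return acc
        return compiled

    def run_block(self, block_id, acc):
        cached = self.translation_cache.get(block_id)
        if cached:
            return cached(acc)          # fast path: reuse the cached translation
        self.exec_counts[block_id] = self.exec_counts.get(block_id, 0) + 1
        if self.exec_counts[block_id] >= HOT_THRESHOLD:
            self.translation_cache[block_id] = self.translate(self.program[block_id])
        return self.interpret(self.program[block_id], acc)
```

After a block crosses the hotness threshold, every subsequent execution skips the interpreter entirely; that amortization is the whole point of the translation-cache approach.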

 
Regarding the LongRun2 aspect:
LongRun2 Technologies are a suite of advanced power management, leakage control and process compensation technologies that can diminish the negative effects of increasing leakage power and process variations in advanced nanoscale designs.
LongRun2 Technologies provide many benefits, including the ability to:

* Improve yield distributions
* Reduce active power consumption
* Minimize standby power consumption
* Enable rapid conversion of SOI-based designs to bulk CMOS designs
It's worth pointing out that Toshiba's SpursEngine, which unlike the PS3's Cell is manufactured on vanilla bulk CMOS, was ported from SOI to bulk CMOS with the help of this technology.
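For intuition on what LongRun-style dynamic voltage and frequency scaling buys, here is a toy governor; the operating points and the selection policy are invented for illustration and are not Transmeta's actual algorithm. The governor picks the slowest voltage/frequency pair that still covers current demand, and dynamic power falls with the classic CMOS relation P ∝ C·V²·f.

```python
# (frequency in MHz, core voltage in V) operating points, lowest first.
# These values are made up for the sketch.
OPERATING_POINTS = [(300, 0.8), (600, 0.95), (900, 1.1), (1200, 1.25)]

def dynamic_power(freq_mhz, volts, capacitance=1.0):
    # Classic CMOS dynamic power: P is proportional to C * V^2 * f.
    return capacitance * volts ** 2 * freq_mhz

def pick_operating_point(utilization):
    """Choose the slowest point whose frequency covers the demand.

    `utilization` is the fraction (0..1) of the top frequency the
    workload currently needs.
    """
    needed = utilization * OPERATING_POINTS[-1][0]
    for freq, volts in OPERATING_POINTS:
        if freq >= needed:
            return freq, volts
    return OPERATING_POINTS[-1]
```

Because voltage enters quadratically, running a 40%-utilized workload at the 600 MHz / 0.95 V point costs well under half the dynamic power of the 1200 MHz / 1.25 V point, which is exactly the kind of saving the "reduce active power consumption" bullet refers to.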

Regarding the CPU aspect, let's not be silly. With all due respect, reusing the Efficeon design today would be pure madness: an Efficeon would have lower performance per clock than an Atom while being a much larger and more power-hungry design... It wasn't very good back then, and it would be simply awful today. Furthermore, there's simply no way that NVIDIA could get away with paying $25M for both LongRun2 and the Efficeon design (patents don't count, that's beside the point).
 
Huh?

Jawed

See the bold part of bowman's post.

Arun: who says that it has to outperform Intel and co.?

And as I said, I was referring to the rumours that have been around over the last year or two, in this very forum as well. It's not like I know anything concrete about it.
 
Personally, I'd expect them to use any sort of x86 technology to make their future GPUs as programmable as Larrabee while using the same ISA... But I don't know. It says 'other technologies', seemingly being cautious not to pin down exactly what that is, so that's the first thing I thought of. :LOL:
 
Arun: I interpret this news to mean NV now has all of the necessary bits to emulate an x86 chip using some future GPU design. I'm not sure why you believe this means NV would only be able to reproduce their own Efficeon...
 
Arun: I interpret this news to mean NV now has all of the necessary bits to emulate an x86 chip using some future GPU design. I'm not sure why you believe this means NV would only be able to reproduce their own Efficeon...
If you know a viable way to emulate a processor optimized towards running a single MIMD instruction stream through one that is massively multithreaded and has no MIMD... Please let the world know, because that'd be worth a few trillion dollars on a bad day.
 
Arun: you're placing some artificial constraints on this situation which may not actually exist.

question: what prevents NV from creating a new architecture that is suited to utilizing this technology?
answer: nothing ;)

IOW: they don't have to adapt these methods to an existing GPU design.
 
Arun: you're placing some artificial constraints on this situation which may not actually exist.

question: what prevents NV from creating a new architecture that is suited to utilizing this technology?
answer: nothing ;)

IOW: they don't have to adapt these methods to an existing GPU design.

There is a fundamental difference between an architecture optimized for single-thread performance and one optimized for aggregate throughput. Furthermore, while I'd love NVIDIA to add some MIMD functionality to their DX11 generation, the last time I spoke to an NV architecture engineer, he didn't feel it would be very useful (*sigh*)
 
There is a fundamental difference between an architecture optimized for single-thread performance and one optimized for aggregate throughput.

Yes, of course.

Furthermore, while I'd love NVIDIA to add some MIMD functionality to their DX11 generation, the last time I spoke to an NV architecture engineer, he didn't feel it would be very useful (*sigh*)

No doubt the last time you spoke to an NV engineer was prior to the unveiling of Larrabee, and this subsequent acquisition of technology by NV ;)
 
No doubt the last time you spoke to an NV engineer was prior to the unveiling of Larrabee, and this subsequent acquisition of technology by NV ;)

The discussions about licensing LongRun would have been ongoing for some time.

As for Larrabee, I don't think Nvidia is panicking just yet.
I'm pretty sure they've had a slightly better idea of what design choices Intel would have made, and what they intend to do to counter them.

No point breaking into a blind run in what will likely be a long-distance race.
 
Furthermore, while I'd love NVIDIA to add some MIMD functionality to their DX11 generation, the last time I spoke to an NV architecture engineer, he didn't feel it would be very useful (*sigh*)

Not that Nvidia isn't known to throw out misdirections every now and again... David Kirk anyone?
 
This would look better on a future Tegra ARM SoC than on a GPU (let alone x86 chips from Nvidia, something I don't see happening anytime soon, if ever).
I'd wager that Nvidia is betting more and more on bringing its GPU technology leadership to the ARM11/Cortex-A8/A9 market rather than worrying about x86 (since, ironically, both Intel and AMD/ATI have exited the ARM market).

Especially now, with the iPhone "OS X", the future open-source, unified Symbian platform, Windows Mobile 7, Google Android, Linux, etc. progressively shifting the landscape and focus of software development away from traditional x86/PPC-based OSes like the "standard" Mac OS X and Windows Vista/"7"...
 
Especially now, with the iPhone "OS X", the future open-source, unified Symbian platform, Windows Mobile 7, Google Android, Linux, etc. progressively shifting the landscape and focus of software development away from traditional x86/PPC-based OSes like the "standard" Mac OS X and Windows Vista/"7"...

Moorestown is going to eat into that market, though. Apple would probably be especially interested in unifying their computer and handheld products under a single architecture and operating system. If future iPhones are not using some sort of Intel Atom SoC derivative I will eat a hat.
 
Moorestown is going to eat into that market, though. Apple would probably be especially interested in unifying their computer and handheld products under a single architecture and operating system. If future iPhones are not using some sort of Intel Atom SoC derivative I will eat a hat.
What's your address? I can ship it to you already if you want, saves time :p (more seriously, Apple has its own HW team and seems to be doing just fine maintaining the handheld Mac OS X - are you implying they value ISA over releasing products that don't suck?)
 
Moorestown is going to eat into that market, though. Apple would probably be especially interested in unifying their computer and handheld products under a single architecture and operating system. If future iPhones are not using some sort of Intel Atom SoC derivative I will eat a hat.

Will you want ketchup with that ? :p

There's no way the 45nm "Moorestown" will beat an equivalent 45nm/40nm ARM Cortex in either performance per watt or pure die area.
Even today, leaving aside the old, power-hungry "Lakeport" northbridge and ICH7, the 45nm Atom is still far more demanding of a standard battery than the 90nm/65nm ARM11 (as used in most current Symbian and Windows Mobile smartphones, besides the iPhone).
 
Would an x86 license (apart from this deal!) be royalty-free, or would Nvidia have to buy a license from Intel?
 