Did Transmeta die too early?

liolio

Aquoiboniste
Legend
Yesterday I was reading some posts on RWT about a new CPU architecture, "the Mill". I read some presentations on the matter and it was over my head. Though discussing its possible merits (or downfalls, after reading the few comments on RWT) is not my point, that reading brought back to my memory those old Transmeta processors.

I decided to read the few wiki entries on the matter, as well as one review of the Efficeon processor. Actually I got pretty impressed by what they were doing back then.
I could not find much, and in particular I could not find a proper review of their last product (the one built on a 90nm process, running up to 1.7GHz).

After those short reads I'm starting to wonder if Transmeta's main downfall was shipping products that were way ahead of their time. It seems to me that reviews back then were a lot less focused on power consumption than they are today.

Going by the results of that review, I would think that this statement from the wiki may not be pushing it too far:
A 2004-model 1.6-GHz Transmeta Efficeon (manufactured using a 90-nm process) had roughly the same performance and power characteristics as a 1.6-GHz Intel Atom from 2008 (manufactured using a 45-nm process).[26] The Efficeon included an integrated northbridge, while the competing Atom required an external northbridge chip, reducing much of the Atom's power consumption benefits.
So what happened? The market was not ready? Market distortion through anti-competitive practices? Mismanagement? Maybe they were a bit too threatening (I mean, they could have competed against MIPS, PPC, and x86, given proper funding)? Or simply the faith in Moore's law and in 10GHz CPUs was too strong back then (/back to "the market was not ready").

Looking at how both Intel and AMD are fighting to get their power consumption down, my feeling is that it is a real sad story that the company could not find more funding to keep going for an extra couple of years. I would think that this company's products would be competitive with ARM-based offerings; actually I would be extremely interested in seeing what their tech could do on a modern process. They sadly died just ahead of the "mobile revolution", and before power consumption turned into the leading concern in "computing at large".

Anyway, do you think that it was a fair fate for the tech, or that history wrongly moved away from one of the most forward-looking approaches to computing of the last decade?
 
Very interesting! I would say they failed because of a lack of use cases: the netbook had yet to be invented, Android did not exist, and Symbian never needed that much processing power. So in the end, yes, there was no market for it.
 
Very interesting! I would say they failed because of a lack of use cases: the netbook had yet to be invented, Android did not exist, and Symbian never needed that much processing power. So in the end, yes, there was no market for it.
Indeed, they shipped in close to an ultrabook form factor. I remember a couple of my users (women in business positions, travelling a lot) saying a lot of good things about those laptops, and how the Toshibas we got as replacements sucked (bulky, hot, etc.). Actually even the last Portégés we deployed did not get that much praise (they are light, with a neat form factor, but get quite hot and can break easily...).
 
It wasn't just launching too early; that CPU had a whole host of its own problems, like the fact that it throttled so aggressively, and so early, in order to maintain that power envelope. Taking those into account, it really isn't terribly competitive even with the later first-generation Atom, on which Intel spent almost no effort reducing power consumption.

It also had a massively huge die for the performance it offered, leading to some abysmal perf/mm² numbers. Something like the Via C3, for example, traded blows with it depending on the benchmark, yet was much less than half the size.

I doubt they would have fared any better later on than they did then.

Regards,
SB
 
Isn't performance per watt a much better target for mobile devices anyway?

We have come to a point where it makes sense to increase die size for more performance, e.g. running multiple clusters at a lower speed rather than a single cluster at a higher speed (for example, Intel Iris Pro and ImgTec MPx graphics solutions).

I suppose they would have fared better today, given that their target market was still being defined in 2008.
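To put rough numbers on the "wide and slow beats narrow and fast" idea: dynamic power in CMOS follows the textbook P ≈ C·V²·f model, and since lower clocks allow lower voltage, two slow cores can match one fast core's throughput at noticeably lower power. A toy sketch (all figures invented, not measurements of any real chip):

```python
# Illustrative textbook dynamic-power model: P ~ C * V^2 * f.
# Numbers are made up; voltage is assumed to drop with frequency
# in the usual DVFS fashion.

def dynamic_power(capacitance, voltage, freq_ghz):
    """CMOS switching power, in arbitrary units."""
    return capacitance * voltage**2 * freq_ghz

# One core at 2 GHz / 1.2 V vs two cores at 1 GHz / 0.9 V,
# both delivering roughly the same aggregate throughput.
single = dynamic_power(1.0, 1.2, 2.0)
dual = 2 * dynamic_power(1.0, 0.9, 1.0)

print(f"single fast core : {single:.2f}")  # 2.88
print(f"two slow cores   : {dual:.2f}")    # 1.62
```

Which is the whole argument for spending die area on parallel clusters once you are power-limited rather than area-limited.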
 
Isn't performance per watt a much better target for mobile devices anyway?

We have come to a point where it makes sense to increase die size for more performance, e.g. running multiple clusters at a lower speed rather than a single cluster at a higher speed (for example, Intel Iris Pro and ImgTec MPx graphics solutions).

I suppose they would have fared better today, given that their target market was still being defined in 2008.

That's the thing, though. The Efficeon was never able to realize its claimed performance in the real world, because the CPU would throttle aggressively and early in order to stay within the claimed power envelope.

And as mentioned, that was just one of its problems. The whole translation layer introduced compatibility problems with some programs, as well as speed-of-execution issues.
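For what it's worth, the speed issue follows from how any dynamic binary translator works: the first time a block of guest code runs, you pay a translation cost; only repeated code amortizes it. A toy sketch of the translate-and-cache idea (this is NOT Transmeta's actual Code Morphing Software, whose internals were proprietary; names are invented):

```python
# Toy model of a dynamic binary translator's translation cache.
# Illustrates why first runs are slow and why rarely-repeated or
# self-modifying code defeats the cache.

translation_cache = {}  # guest block address -> "translated" native code

def translate(block_addr):
    """Stand-in for the expensive x86 -> native VLIW translation step."""
    return f"native_code_for_{block_addr}"

def execute_block(block_addr):
    """Return the (cached) translation and whether it was a cache hit."""
    hit = block_addr in translation_cache
    if not hit:
        translation_cache[block_addr] = translate(block_addr)  # slow path
    return translation_cache[block_addr], hit

# A loop re-executing the same block: one miss, then all hits --
# the case where translation amortizes well.
results = [execute_block(0x1000)[1] for _ in range(5)]
print(results)  # [False, True, True, True, True]
```

Code that never re-executes (installers, boot, some benchmarks) stays on the slow path, which matches the compatibility-and-speed complaints about these chips.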

And size is going to be pretty darn important unless cost doesn't matter at all. Considering that one of the major advantages ultra-mobile ARM CPUs have over Intel is cost, you should not underestimate the effect of cost on whether a given solution can attain mass adoption. If your die size is more than double that of your closest competitor (in terms of performance), then your cost is also going to be significantly higher for a given profit margin. You could, of course, slash your profit margin, but can you stay in business if you slash it too much in order to gain mass adoption of your chip?
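To see why doubling die area more than doubles cost, here is a back-of-the-envelope sketch (wafer cost and defect density are completely hypothetical; it ignores edge loss and uses the simplest Poisson yield model):

```python
# Back-of-the-envelope die-cost model. ALL numbers are invented;
# the point is only the shape of the curve: a bigger die means
# fewer candidates per wafer AND lower yield, so cost per good
# die grows faster than area.
import math

WAFER_DIAMETER_MM = 300.0
WAFER_COST = 5000.0        # hypothetical dollars per wafer
DEFECTS_PER_MM2 = 0.002    # hypothetical defect density

def cost_per_good_die(die_area_mm2):
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    candidates = wafer_area / die_area_mm2               # ignores edge loss
    yield_frac = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)  # Poisson yield
    return WAFER_COST / (candidates * yield_frac)

small = cost_per_good_die(70.0)
big = cost_per_good_die(140.0)
print(f"70 mm2 die : ${small:.2f}")
print(f"140 mm2 die: ${big:.2f}")
```

Run it and the 140 mm² die costs more than twice the 70 mm² one, which is the margin squeeze described above.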

Regards,
SB
 
Good points. I suppose a large-die strategy would really only be an option for Intel and Apple at this point in time.
 
It wasn't just launching too early; that CPU had a whole host of its own problems, like the fact that it throttled so aggressively, and so early, in order to maintain that power envelope. Taking those into account, it really isn't terribly competitive even with the later first-generation Atom, on which Intel spent almost no effort reducing power consumption.

It also had a massively huge die for the performance it offered, leading to some abysmal perf/mm² numbers. Something like the Via C3, for example, traded blows with it depending on the benchmark, yet was much less than half the size.

I doubt they would have fared any better later on than they did then.

Regards,
SB

That's the thing, though. The Efficeon was never able to realize its claimed performance in the real world, because the CPU would throttle aggressively and early in order to stay within the claimed power envelope.

And as mentioned, that was just one of its problems. The whole translation layer introduced compatibility problems with some programs, as well as speed-of-execution issues.

And size is going to be pretty darn important unless cost doesn't matter at all. Considering that one of the major advantages ultra-mobile ARM CPUs have over Intel is cost, you should not underestimate the effect of cost on whether a given solution can attain mass adoption. If your die size is more than double that of your closest competitor (in terms of performance), then your cost is also going to be significantly higher for a given profit margin. You could, of course, slash your profit margin, but can you stay in business if you slash it too much in order to gain mass adoption of your chip?

Regards,
SB
I don't think the die sizes of those things were huge: Crusoe came in two flavors, 77 and 73 mm², and the Efficeon @90nm was 68 mm². The only chip that was "huge" was the 130nm version, as I guess 1MB of L2 was pushing it on such an antiquated process. Though I don't think it was insanely big either.
The review I found gives no information in that regard; actually, they compare CPUs by looking at the "pad" size, which by today's standards is, let's say, weird.

So those CPUs were not big by any means, and I wonder what ARM was providing at that time that would make those late Efficeons look bad.

I read a couple of things about those chips, and it seems that the main offender as far as compatibility was concerned was the Crusoe chips; the Efficeon fixed most of those issues.

I think that a lot of the points that you raise, and that are raised in the review I linked earlier, are sort of irrelevant to the merits of the chip. The chip was a "real mobile" product, with really low power consumption, advanced power management, etc. (Edit: by the way, that is why I chose that section and not the "processor and chipset" one.)
As Wiki points out, it got compared mostly to vanilla desktop CPUs, at a time when power measurements were not as refined as they are now in this era of mobile products. It was really a laptop-only type of product (and it could have done more; more to come on the topic).

I would argue that the technology was held back by the constraints of its time. In the review I read (TM8600, so the 90nm part @ 1GHz), they look at the normalized battery life of the laptop they were reviewing. They compare the whole platform, not the CPUs.
Pretty much, there were other power hogs in those systems: southbridge, GPU, optical drive, HDD, and so on.
Now Transmeta could not do anything about this, and I think a lot of their power advantage was lost in the "noise". The thing is that, with those power characteristics, along with up-to-date components, the thing could be fitted in a tablet, which none of the CPUs it was matched against could do; the device would melt.

About the throttling issue, I agree that they might have been a bit too aggressive with their TDP figure; some more headroom would have helped, though it also has to do with the expectations of that time. It is a laptop/mobile chip, and pretty much like with tablets and phones nowadays, how the device is used has nothing to do with the type of workloads used to benchmark it.
As far as I can tell, the users (or people) who were interested in that type of product wanted a pretty mobile device; they were not doing anything compute-intensive, and I suspect that the end user doesn't care about throttling. It is the same thing as turbo in modern CPUs: the CPU turbos so the device looks snappy. If you run one bench while turbo kicks in you may see really nice results, though if you kept that up for a while, throttling would kick in. Do customers care? No.
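That turbo/throttle dance can be sketched with a toy governor: run at the boost clock until a modeled temperature crosses a limit, then fall back to the base clock. All numbers here are invented (real governors track power, temperature, and time constants far more carefully):

```python
# Toy TDP governor. Short benchmarks see mostly boost clocks;
# sustained loads settle at the base clock. Every number is
# hypothetical, chosen only to show the behavior.

BOOST_GHZ, BASE_GHZ = 1.7, 1.0
TEMP_LIMIT = 90.0  # degrees C, hypothetical
HEAT_PER_TICK = {BOOST_GHZ: 4.0, BASE_GHZ: 0.5}  # net heating per tick

def run(ticks, start_temp=50.0):
    """Simulate `ticks` time steps; return the clock chosen each step."""
    temp, trace = start_temp, []
    for _ in range(ticks):
        clock = BASE_GHZ if temp >= TEMP_LIMIT else BOOST_GHZ
        temp += HEAT_PER_TICK[clock]
        trace.append(clock)
    return trace

trace = run(15)
print(trace)  # ten boost ticks, then throttled to base clock
```

A benchmark shorter than the boost window records the boost number; a sustained run records something much worse, which is exactly the gap between claimed and realized performance discussed above.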
The people I knew who liked that type of product were mostly doing e-mail and some not-very-advanced Office work.

So overall, and from my POV, a lot of the issues are more related to the perceptions of people in the IT field than to end users' needs. Now we understand things like TDP and the difference with SDP, and that in fact the needs of a lot of users are quite conservative, and so on.
Transmeta got there too early. The press was not ready, and actually I would say the hardware was not ready either: the lack of flash storage, the still-standard use of optical drives, and power-hungry displays nullified a lot of the "wins" of those CPUs.

I still think it is sad they went out of business before the mobile revolution, soon after those 90nm Efficeons shipped and lots of the tech's early issues were solved.
I wish we could have a proper review (using today's standards) of competing laptops of that time, with a proper breakdown of power consumption (CPU in isolation) and heat characteristics (lots of laptops still get, imo, way too hot).

Again, Wiki claims (so that is just that) that those 63mm² chips @90nm had the same performance as a 2008 Atom @45nm (per cycle), which in turn is (remotely, though) competitive with ARM offerings; that is damned impressive.
Again, looking at where we are now, almost a decade later, on 32/28nm processes, even unchanged the chip could be quite something, and incredibly tiny. Now one should wonder what an up-to-date rendition of the technology would achieve; my personal bias would be that they could actually have topped the race.

Anyway, I would bet that the people involved in that adventure have to be incredibly bitter. I guess they might be proud too: they were a little company, they got funding, they shipped four generations of products, and so on. Still... I would think that the most "bitter" part is the buyout of their tech by what looks like a "patent troll" company, as it meant that their tech was no longer going to have a proper life.

I think that some companies missed a great opportunity by not buying that company. Intel, AMD, Nvidia, and MSFT would all have benefited from early access to a competent, low-power "x86" CPU. Actually, I would think it would serve equally well as an ARMv7 (or even v8) CPU too (looking at Nvidia's efforts to develop an ARM CPU, or AMD that is to use off-the-shelf parts...).
 
If Anand is right, it could be that Nvidia's own ARM CPU cores use the same approach as Transmeta used back in the day.
I know NV's marketing department can be quite over the top, but maybe the core really is 8-wide internally (more in the broad sense than in the number of execution ports).
Denver's unveiling in due form is something I'm eagerly waiting for, out of plain geekiness :)
 
Going back even further, there was the Chromatic Research MPACT, a VLIW processor used on a multifunction card that could do all sorts of things, including audio, MPEG2, 3D, 2D, modem, etc. It failed for most of the same reasons as the Efficeon: software development was very complex and costly, and the chip was big and inefficient compared to ASICs.

http://en.wikipedia.org/wiki/Chromatic_Research
http://vintage3d.org/chromatic.php#sthash.V9BgkCSI.dpbs

But Chromatic Research was bought by ATi, and maybe the (adapted) VLIW hardware was used within the R9700 and newer GPUs.
 
But Chromatic Research was bought by ATi, and maybe the (adapted) VLIW hardware was used within the R9700 and newer GPUs.

Maybe. ATI bought a lot of failing companies back then. R300-R580 are very graphics-specific, though. NV30 was VLIW too.
 
But Chromatic Research was bought by ATi, and maybe the (adapted) VLIW hardware was used within the R9700 and newer GPUs.
(necromancer) So it may still live on in those Qualcomm Adrenos; it is alive 8O.
Back to Transmeta: I'm looking forward to a more in-depth review of the Denver cores.
 