NVIDIA shows signs ... [2008 - 2017]

Quote:
The license terms are thankfully a lot more palatable than they were with the initial X58 launch. To support SLI a motherboard manufacturer simply has to pay NVIDIA $30,000 up front plus $3 per SLI enabled motherboard sold. In turn NVIDIA gives the motherboard manufacturer a key to put in its BIOS that tells the NVIDIA display drivers that it’s ok to enable SLI on that platform.

The Scumbags...

Motherboard makers should charge NVIDIA $3,000 for a special key they can put in their BIOSes that tells the chipset drivers that it's ok to enable SLI on that platform.

PS: Regarding the i5, since the PCIe controller is on the CPU, is it up to the CPU maker or the board maker to enable SLI?
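
The actual key scheme isn't public, but to make the mechanism in the quote concrete: one plausible shape is an HMAC-style signature over a board identifier, which the driver recomputes and compares. This is purely an illustrative sketch; every name and detail in it is hypothetical:

Code:
import hmac, hashlib

NVIDIA_SECRET = b"hypothetical shared secret baked into the driver"

def sli_key_for_board(board_id: bytes) -> bytes:
    # What NVIDIA would hand the board maker to embed in the BIOS.
    return hmac.new(NVIDIA_SECRET, board_id, hashlib.sha256).digest()

def driver_allows_sli(board_id: bytes, bios_key: bytes) -> bool:
    # Driver-side check: recompute the signature and compare in constant time.
    return hmac.compare_digest(sli_key_for_board(board_id), bios_key)

# The board maker pays for the key once; the driver just verifies it.
print(driver_allows_sli(b"P55-DELUXE", sli_key_for_board(b"P55-DELUXE")))  # True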
 
Well, I guess Transmeta supported all these things. So how did they do it? Did they get a license?

I'm no expert here, but to my understanding most x86 patents are about hardware implementations. Since what Transmeta did is basically a software solution, it probably avoided many of these patents.

The Crusoe CPU is surprisingly simple, and many x86 functions, including the page-table walks, are done through software.
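
To make "page tables in software" concrete, here's a minimal sketch of a software-walked two-level page table, in the spirit of what the Code Morphing layer does instead of a hardware walker. The table format here is invented for illustration, not Transmeta's actual one:

Code:
# Minimal two-level page-table walk done entirely in software (illustrative
# format). A conventional x86 does these lookups in a hardware walker;
# Crusoe did them in its Code Morphing Software.
PAGE_SHIFT = 12                      # 4 KiB pages

def translate(virt: int, page_dir: dict) -> int:
    dir_idx = (virt >> 22) & 0x3FF   # top 10 bits index the page directory
    tbl_idx = (virt >> 12) & 0x3FF   # next 10 bits index the page table
    offset  = virt & 0xFFF           # low 12 bits are the byte offset
    page_table = page_dir.get(dir_idx)
    if page_table is None or tbl_idx not in page_table:
        raise RuntimeError("page fault")  # software raises the fault as well
    return (page_table[tbl_idx] << PAGE_SHIFT) | offset

pd = {0: {1: 0x42}}                  # virtual page 1 -> physical frame 0x42
print(hex(translate(0x1234, pd)))    # 0x42234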
 
Was the Transmeta any good?

Low power, low performance. Performance was a lot lower than its power consumption would have indicated, however, even compared with the competition at the time. In other words, power consumption didn't scale down as much as expected, or performance didn't scale up as much as expected, depending on how you want to think of it.

It was a promising design but flawed, IMO.

Regards,
SB
 
To me it's a technology marvel, just like the first DRAM chip. Not very practical, but always amazing to look at. I actually bought a small notebook (very similar in size to current netbooks, by the way) with a Crusoe CPU. The general performance is unfortunately quite lacking, about the same speed as a half-clocked Celeron (i.e. a 600MHz Crusoe is roughly the same as a 300MHz Celeron).

However, its compatibility is really amazing. I used that notebook as a normal one and I never encountered any incompatibility. It just behaves like a normal x86 processor. It's hard to believe that all x86 programs are run through emulation at a pretty reasonable performance. It's even more amazing once you learn that the underlying architecture is a very simple VLIW instruction set.

Of course, from the outcome we now know that a pure software solution is not the way to go. Many of Transmeta's power-saving tricks are not related to their "code morphing" approach and can be used on normal hardware-based x86 CPUs as well. Actually, many of these tricks are used in current mobile processors and resulted in some patent lawsuits between Transmeta and Intel, from which Transmeta earned a settlement.
 
nAo: TMTA used binary translation to provide x86 compatibility. Their first chip never made it out the door and Crusoe was kind of crap. Their 3rd chip looked better, but by then the company was a goner.

If they had ever gotten a lot of traction, they would have gotten sued by Intel...but they didn't.

To me it's a technology marvel, just like the first DRAM chip. Not very practical, but always amazing to look at. I actually bought a small notebook (very similar in size to current netbooks, by the way) with a Crusoe CPU. The general performance is unfortunately quite lacking, about the same speed as a half-clocked Celeron (i.e. a 600MHz Crusoe is roughly the same as a 300MHz Celeron).

However, its compatibility is really amazing. I used that notebook as a normal one and I never encountered any incompatibility. It just behaves like a normal x86 processor. It's hard to believe that all x86 programs are run through emulation at a pretty reasonable performance. It's even more amazing once you learn that the underlying architecture is a very simple VLIW instruction set.

It's binary translation, not pure emulation. Big difference. Infrequently encountered code was emulated, commonly executed code was BT'd.
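
A toy sketch of that split, the classic DBT dispatch structure: blocks start out interpreted, and once a block crosses a hotness threshold it is translated once and the cached native version is reused from then on. Names and the threshold value are invented; nothing here is Transmeta-specific:

Code:
# Toy dispatch loop for a dynamic binary translator. Cold blocks are
# interpreted; hot blocks are translated once and cached.
HOT_THRESHOLD = 50

exec_counts = {}   # block address -> times executed so far
translated  = {}   # block address -> cached native code (a callable here)

def run_block(addr, interpret, translate):
    if addr in translated:
        return translated[addr]()            # fast path: reuse translation
    exec_counts[addr] = exec_counts.get(addr, 0) + 1
    if exec_counts[addr] >= HOT_THRESHOLD:
        translated[addr] = translate(addr)   # pay translation cost once
        return translated[addr]()
    return interpret(addr)                   # cold path: plain emulation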

Of course, from the outcome we now know that a pure software solution is not the way to go. Many of Transmeta's power-saving tricks are not related to their "code morphing" approach and can be used on normal hardware-based x86 CPUs as well. Actually, many of these tricks are used in current mobile processors and resulted in some patent lawsuits between Transmeta and Intel, from which Transmeta earned a settlement.

TMTA never filed any of the patents that they nailed Intel on. They bought several patents from a 3rd party, which Intel was found to have violated. In fact, most of TMTA's own patents proved not to be worth much.

Binary translation is very interesting, but one of the big problems was that you could either translate the x86-->VLIW or you could execute your translation, not both. Whoops.

Also, the code bloat was awful for Windows. They really needed much, much, much bigger caches... 1MB probably wasn't enough (and that was their L2; the L1I was smaller).

DBT has its place and is a great technology. Intel is basically doing minimal DBT in hardware with their stack engine and macro-op fusion. DBT can be a great way to support some legacy crap (e.g. we don't need x87 hardware any more).
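
Macro-op fusion really is a tiny translation step in the decoder: a compare immediately followed by a conditional branch is emitted as one fused micro-op instead of two. A toy illustration; the tuple format is invented:

Code:
# Toy macro-op fusion: spot a cmp immediately followed by a conditional
# branch and emit a single fused op carrying both operands and the target.
def decode(instrs):
    out, i = [], 0
    while i < len(instrs):
        if (instrs[i][0] == "cmp" and i + 1 < len(instrs)
                and instrs[i + 1][0].startswith("j")):
            fused = (("fused-cmp-" + instrs[i + 1][0],)
                     + instrs[i][1:] + instrs[i + 1][1:])
            out.append(fused)
            i += 2                       # consumed two x86 ops for one uop
        else:
            out.append(instrs[i])
            i += 1
    return out

print(decode([("cmp", "eax", "ebx"), ("jne", "loop"), ("add", "eax", 1)]))
# [('fused-cmp-jne', 'eax', 'ebx', 'loop'), ('add', 'eax', 1)]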

However, TMTA was founded on the idea of building a faster, simpler x86 using VLIW...based on a bunch of simulations done on SPARC. That plan never materialized and became 'lower power x86' - not a bad business plan, but much more difficult.

Intel proved that it's a hell of a lot easier to take a P3, reduce frequency and voltage and use your best bins...rather than designing a brand new processor. And when they did design a brand new processor for mobile, that was the end of TMTA. TMTA made a bunch of quite fatal mistakes along the way...but if they had a gravestone it would say 'thanks for the Pentium M'.

David
 
regarding Transmeta:

from wiki:

A 1.6GHz Transmeta Efficeon from 2004 manufactured using a 90nm process has roughly the same performance and power characteristics as a 1.6GHz Intel Atom from 2008 manufactured using a 45nm process. [22] The Efficeon included an integrated northbridge, while the Atom requires an external northbridge chip (reducing much of the Atom's power consumption benefits).

So a Transmeta Efficeon CPU at 45nm would most probably be smaller, yet faster, than an Atom CPU. Rather cool.

Efficeon @ 90nm = 68mm² ; Atom 230 @ 45nm = 26mm²
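
The "smaller" part follows from ideal area scaling: a shrink from 90nm to 45nm halves each linear dimension, i.e. roughly a 4x area reduction in the ideal case (real shrinks never scale perfectly):

Code:
# Ideal optical shrink from 90nm to 45nm; real processes fall short of this.
efficeon_90nm = 68            # mm²
scale = (90 / 45) ** 2        # linear shrink squared = 4x
print(efficeon_90nm / scale)  # ~17 mm², vs 26 mm² for the 45nm Atom 230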
 
It's binary translation, not pure emulation. Big difference. Infrequently encountered code was emulated, commonly executed code was BT'd.

Binary translation is still emulation; I don't think the word "emulation" should be limited to one particular way of doing it.

Binary translation is very interesting, but one of the big problems was that you could either translate the x86-->VLIW or you could execute your translation, not both. Whoops.

That's not a very big problem. If your processor is fast enough, the time spent on translation can be made a small part of total execution time.
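
In numbers: the translation cost is paid once and the execution cost n times, so the overhead fraction shrinks as a block gets hotter. The unit costs below are made up purely to show the shape of the curve:

Code:
# Overhead fraction = t_translate / (t_translate + n * t_exec), which
# vanishes for hot code. The unit costs here are invented.
t_translate, t_exec = 1000.0, 1.0
for n in (1, 100, 10_000):
    overhead = t_translate / (t_translate + n * t_exec)
    print(f"n={n:>6}: translation is {overhead:.1%} of total time")
# n=1: 99.9%, n=100: 90.9%, n=10000: 9.1% -- great for hot loops,
# bad for code that only ever runs once.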

Also, the code bloat was awful for Windows. They really needed much, much, much bigger caches... 1MB probably wasn't enough (and that was their L2; the L1I was smaller).

I don't think the L1 or L2 is the major problem. Actually, I think its major problem (especially in Crusoe) is that they underestimated the complexity of emulating all x86 behaviors, and so designed a very simple processor almost without any dedicated hardware for better x86 emulation.

Intel proved that it's a hell of a lot easier to take a P3, reduce frequency and voltage and use your best bins...rather than designing a brand new processor. And when they did design a brand new processor for mobile, that was the end of TMTA. TMTA made a bunch of quite fatal mistakes along the way...but if they had a gravestone it would say 'thanks for the Pentium M'.

Yeah. The problem was not that Intel couldn't make a low-power processor, it's just that they hadn't. Transmeta just let Intel know that it was a good idea to do so, and that didn't do anything good for Transmeta, though.
 
Didn't the Alpha chip translate x86 into its own instruction set and then write it to disk, so the next time that program was run it was using Alpha instructions?
 
Didn't the Alpha chip translate x86 into its own instruction set and then write it to disk, so the next time that program was run it was using Alpha instructions?

Not really. DEC released FX!32 quite a few years after Alpha hit the market. At the time I got the impression that it was basically intended as a tool to ease migration from x86 to Alpha for end-users tied to x86 by legacy binaries. I don't know whether it was an afterthought or intended from the get-go during Alpha's development.

Intel tried a similar strategy with Itanium I think, which didn't work out very well either as far as I can tell.
 
I'm sure it does (via FX!32).

From this article:
http://www2.computer.org/portal/web/csdl/abs/mags/mi/1998/02/m2056abs.htm
The first time an x86 application is run, all of the application is emulated. Together with transparently running the application, the emulator generates an execution profile that describes the application's execution history. The profile shows which parts of the application are heavily used (for this user) and which parts are unimportant or rarely used. While the first run may be slow, it "primes the pump" for additional processing. Later, after the application exits, the profile data directs the background optimizer to generate native Alpha code as replacement for all the frequently executed procedures. When the application is next run, the native Alpha code is used and the application executes much faster.
 
That's not a function of the Alpha processor itself, nor of the OS it runs on, nor is it a key requirement for Alpha to execute code. I worked on many an Alpha server/workstation; none of my code ran under FX!32, because I recompiled it. We didn't need x86 compatibility because we didn't need WinNT; we were running OSF/1, or whatever it was called back then. It worked just fine and was VERY fast, thanks. :)

FX!32 was a software emulation/interpretation/translation layer that ran at the OS level or on top of it (the distinction is pretty ambiguous, as far as I understand how Windows works). It required a native-Alpha WinNT to function; you couldn't run an x86 WinNT OS kernel on an Alpha chip. Once MS killed native-Alpha WinNT, the whole idea died.

Didn't Apple do something similar during their journey from 68000->PPC->x86? The OS runs as native code but non-native apps are translated/emulated. That doesn't imply that the underlying CPUs are capable of and/or rely on such translation/emulation to be able to run the OS kernel. Nor are the OS kernels in question dependent on such translation/emulation being available in order to function.

What I'm pedantising about here is that the way you put your question suggested that such x86 translation was routine/required for Alpha. It wasn't required, and in the end I suspect it wasn't routine either for the bulk of Alpha users (when weighted by gigaflops purchased).
 
No, I was making the point that, unlike Transmeta, the Alpha approach wrote the converted x86 code to disk so it didn't have to translate it again.
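
That persistence is straightforward to picture: key the translation by a hash of the original binary, write it out, and reload it on the next run, much as FX!32's background optimizer did with its profile-directed translations. A minimal sketch; the file layout and names are invented, not FX!32's actual scheme:

Code:
import hashlib, pickle, pathlib

CACHE_DIR = pathlib.Path("~/.translations").expanduser()  # invented layout

def cached_translation(x86_binary: bytes, translate):
    # Translate once, persist to disk, reuse on every later run.
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / hashlib.sha256(x86_binary).hexdigest()
    if path.exists():                   # later runs: no retranslation needed
        return pickle.loads(path.read_bytes())
    native = translate(x86_binary)      # first run: slow, emulate and profile
    path.write_bytes(pickle.dumps(native))
    return native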
 
Essentially infinite margin on this chipset sure helps your own chipset margins :LOL:

Yes, the per-unit stuff carries very high gross margins.

If they are doing testing and validation (I really want to know what exactly they are doing, whether they just generate a key each time or comprehensively test each motherboard), then they will have the cost of a lab to maintain.

Also not counted as a direct cost is the $$ spent on the marketing effort needed so consumers start looking for and asking about SLI compatibility.

The license terms are thankfully a lot more palatable than they were with the initial X58 launch. To support SLI a motherboard manufacturer simply has to pay NVIDIA $30,000 up front plus $3 per SLI enabled motherboard sold. In turn NVIDIA gives the motherboard manufacturer a key to put in its BIOS that tells the NVIDIA display drivers that it’s ok to enable SLI on that platform.
The Scumbags...

Motherboard makers should charge NVIDIA $3,000 for a special key they can put in their BIOSes that tells the chipset drivers that it's ok to enable SLI on that platform.

PS: Regarding the i5, since the PCIe controller is on the CPU, is it up to the CPU maker or the board maker to enable SLI?

A few pages back I was talking about how the optimum license fee for a licensor to charge is, in theory, pretty much always too much for the licensee to in turn maximise their profit (and the associated complementary-monopoly stuff).

I.e. at $3 per unit + $30k fixed, this maximises NVIDIA's revenues at the expense of the motherboard manufacturer, who gets to enjoy lower sales and profit, as customers won't on average cover the full amount of the license fees for the motherboard manufacturer.
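
Putting numbers to that: the $30k up-front fee makes the effective per-board cost volume-dependent, which punishes small production runs hardest. The volumes below are hypothetical:

Code:
# Effective per-board SLI cost = $3 per unit + the $30k fixed fee amortised
# over the run. Volumes are invented for illustration.
for units in (5_000, 30_000, 100_000):
    per_board = 3 + 30_000 / units
    print(f"{units:>7} boards: ${per_board:.2f} per board")
# 5,000 boards -> $9.00 each; 100,000 boards -> $3.30 each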

Anandtech has followed up the previous article; apparently the SLI fee has increased, plus there's a hidden extra cost for a more expensive motherboard:
If you start at $40 for the motherboard, you’ll need to add another $10 for a 6-layer PCB. A 6-layer PCB is necessary if you want to run SLI at this point, otherwise you can get by with a cheaper $5 4-layer PCB. Mentioning SLI also requires validation and support from NVIDIA. That’s $30,000 up front plus an average of $5 per motherboard.
From the bottom table in the article you can see the problems for the motherboard manufacturers perfectly outlined.
 
Anandtech has followed up the previous article; apparently the SLI fee has increased, plus there's a hidden extra cost for a more expensive motherboard.

It's interesting too how the Intel socket license fee is higher still. And that the P55 still costs just as much as a chipset which has an actual northbridge, even though all that functionality is now already paid for in the CPU.

Speaking of scumbags :D
 
It's interesting too how the Intel socket license fee is higher still. And that the P55 still costs just as much as a chipset which has an actual northbridge, even though all that functionality is now already paid for in the CPU.

Speaking of scumbags :D

In the comments Anand says:
Intel's got the master list and doesn't like sharing it publicly. The prices generally break down like this:

Mainstream chipsets: $30 - $40 for NB

South Bridge: $3 - $7 (although I've seen south bridges as high as $14, but it looks like they are a lot cheaper these days).

The older "value" chipsets will be priced below $30 but they generally suck in one way or another :)
The other comments say Anand has come in a bit high on some of the component costs; for the cheaper boards they would expect the total cost to come in at around $80 or so.

If he has overcooked it on the various components this only makes the licensing situation worse.

Note above how cheap the southbridge is compared to the northbridge... As I said before, this component is held hostage by all the license fees from all the different interconnects and third-party IP. So, as a strategy, Intel sells it at near cost to deny the third parties as much income as possible, and adds extra margin onto its northbridges and CPUs, which are relatively license-free, to make up the difference.

Looking at what NVIDIA is doing at $5 per unit, they are really playing brinkmanship with the motherboard manufacturers. This would be fine if they were a pure licensing operation; it would maximise their profit. But they are not; this is really just a side game to their GPUs, and in that sense this move makes no sense. To maximise GPU income, they should instead be trying something like subsidising the manufacturers, say 50c or so on each motherboard, to include dual PCIe slots (rather than cheaping out by putting in just one), and pretty much offering the validation for free in return for prominent placement of the SLI logo on the board and box and associated advertising of the motherboard.
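
The break-even on that subsidy idea is simple: a per-board subsidy s pays for itself if attach rate x margin per extra GPU > s. Every figure below is invented just to show the shape of the argument:

Code:
# Break-even for subsidising dual PCIe slots: worthwhile if the share of
# buyers who later add a second card, times the margin on that card,
# exceeds the per-board subsidy. All numbers are hypothetical.
subsidy = 0.50                 # $ per motherboard
gpu_margin = 40.0              # $ margin on one extra GPU sold
breakeven_attach = subsidy / gpu_margin
print(f"pays off if over {breakeven_attach:.2%} of boards gain a 2nd GPU")  # 1.25%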
 
That depends on where the value-add for the customer is coming from. Do they buy a motherboard based on SLI support, or do they buy a second graphics card because their motherboard supports it? Obviously these guys wouldn't pay for the privilege if it wasn't worth it.
 
Looking at what NVIDIA is doing at $5 per unit, they are really playing brinkmanship with the motherboard manufacturers. This would be fine if they were a pure licensing operation; it would maximise their profit. But they are not; this is really just a side game to their GPUs, and in that sense this move makes no sense.

From that perspective the chipset business is merely a side game to Intel's CPUs as well.

Yet Intel charges a fee per socket, asks considerable sums for their chipsets, and then they don't even allow anyone else to make chipsets for the QPI architecture.

How is this different from Nvidia licensing SLI out for a fee? Heck, the GPU has arguably been a more important component than the CPU for some time already - at least for gamers, a top of the line GPU with a budget CPU generally nets better results than a top of the line CPU with a budget GPU, and it's no different for computationally intensive jobs like folding. And if anything this trend is set to continue, as GPUs become more general purpose devices, or in fact become part of the CPUs in the future.
 
Yeah, but you don't need an SLI motherboard to plug in an NVIDIA GPU, whereas you do need an Intel-socket MB to plug in an Intel CPU ;) And along that path, the people using SLI are, at least in my opinion, a minority (and a relatively small one at that), so it should be NVIDIA's incentive to stimulate SLI adoption, since as it stands now the majority of people won't need the SLI feature, let alone shell out a price premium on it to compensate for the SLI fee. BUT if it did come as a feature of the board for the same price, then someone with a GeForce could add a second card some time in the future, therefore resulting in a buy for NV.
 