NVidia's Dispute over Nehalem Licensing - SLI not involved

Jawed

Legend
So, what's on the table?

http://news.cnet.com/8301-10784_3-9956256-7.html

[...]Intel has since more-or-less confirmed that licensing discussions between Intel and Nvidia for Intel's next-generation processors are not going well and the resulting conflict could have implications for high-end gaming PCs.

Intel released an additional statement after this blog was posted. "We are not seeking any SLI concession from Nvidia in exchange for granting any Nehalem license rights to Nvidia," the company said.
Where's the popcorn?

Jawed
 
"We are not seeking any SLI concession from Nvidia in exchange for granting any Nehalem license rights to Nvidia,"

Well, they should.
 
I think this is the most interesting part:
"There is a disagreement between Intel and Nvidia as to the scope of Nvidia's license from Intel to make chipsets compatible with Intel microprocessors. Intel is trying to resolve the disagreement privately with Nvidia and therefore we will not provide additional details. It is our hope that this dispute will be resolved amicably and that it will not impact other areas of our companies' working relationship."
At Analyst Day 2008, Jen-Hsun clearly seemed to think they were covered for every single Intel socket as long as their licensing agreement remained in effect, and here it looks like Intel doesn't think so - even though NVIDIA wouldn't have much of anything to bargain with to get new sockets (except SLI, presumably) if they aren't already covered.

There's another massive implication here: at Analyst Day, Jen-Hsun basically implied that if Intel wanted to get rid of that licensing agreement, he'd be more than willing to - presumably implying that he believes Larrabee & Intel IGPs are dependent on it and that if Intel wants to abandon those, he'll gladly stop caring about the relatively much lower amount of money in the chipset market.

So NVIDIA doesn't even *want* this license agreement to remain in effect, and now it looks like Intel is claiming that it really doesn't give NVIDIA what they think it does. Either this gets resolved quickly, or this risks turning into a ridiculously massive legal battle with the core businesses of both companies at stake. Fun!
 
If anything, denying Nvidia a QuickPath license (and Nvidia's Intel-based chipset market share is not that big a deal right now anyway) would only push Nvidia to reintegrate some of its engineering strength back into GPUs, something that even Intel wouldn't be so happy about.
Larrabee's competition would suddenly be potentially stronger.
 
Uhhh, with all due respect to NVIDIA's chipset team, I don't think them contributing to NV's DX11 GPUs would make them anything but buggier and substantially delayed :p
 
So maybe there is more to the Asian rumor that NV will remove display outputs from GeForce cards in Q2 2009, making them usable only with nForce or AMD chipsets that have display outputs on the mainboard, to force Intel to offer them a license so that leading GPU technology can still be used with Intel CPUs. :LOL:
 
Uhhh, with all due respect to NVIDIA's chipset team, I don't think them contributing to NV's DX11 GPUs would make them anything but buggier and substantially delayed :p

Well, getting rid of them would clear quite a few expenses (remnants from ULi in Taiwan, the Indian R&D facility, and their chipset teams back home).
The money saved *could* be redirected to hiring new blood and general GPU R&D, no?
 
Uhhh, with all due respect to NVIDIA's chipset team, I don't think them contributing to NV's DX11 GPUs would make them anything but buggier and substantially delayed :p

Is it possible for them to corrupt our pixels like they corrupt our data too? Ooh! What if they raise TDPs even further so we can finally cook meals on our computers while we game!
 
So maybe there is more to the Asian rumor that NV will remove display outputs from GeForce cards in Q2 2009, making them usable only with nForce or AMD chipsets that have display outputs on the mainboard, to force Intel to offer them a license so that leading GPU technology can still be used with Intel CPUs. :LOL:

Great idea. Let's introduce more latency into our rendering pipeline. That'll make gamers happy, especially in the era of multiple GPUs ;)
 
I'm not sure "latency" is the correct term. At least in the sense of inter-frame delay ((see MFA? I said it :p)) ((no disrespect, I get slammed for using incorrect terminology too)) but rather CPU overhead. There is no additional latency to using the IGP to do final render output with Hybrid Power. Just additional CPU overhead ((though minor)).

Chris
 
Is that due to PCIe traffic or buffer copy?

PCIe traffic, I believe ((but not 100% certain)). Nvidia uses the SMBus for this. But we're talking very negligible amounts. Nvidia says up to 5%, but realistically it's more like 1% or even less.

Chris
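
For a rough sense of scale, here's a quick Python sketch (all numbers are assumptions for illustration, not anything Nvidia has published) of how a 1% or 5% CPU overhead would translate into framerate in a purely CPU-limited scene:

# Back-of-envelope sketch with assumed numbers: how a small per-frame CPU
# overhead shows up as lost framerate when the game is CPU-limited.

def fps_with_overhead(base_cpu_ms, overhead_fraction):
    # CPU work per frame grows by overhead_fraction; fps is its inverse.
    return 1000.0 / (base_cpu_ms * (1.0 + overhead_fraction))

base_cpu_ms = 10.0  # hypothetical CPU-limited frame: 10 ms of CPU work = 100 fps
for overhead in (0.01, 0.05):  # the ~1% and ~5% figures quoted above
    print(f"{overhead:.0%} overhead: {fps_with_overhead(base_cpu_ms, overhead):.1f} fps, down from 100.0")

Even the 5% worst case only drops a 100 fps CPU-limited scene to roughly 95 fps, and in GPU-limited scenes the hit mostly disappears.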
 
I'm not sure "latency" is the correct term. At least in the sense of inter-frame delay ((see MFA? I said it :p)) ((no disrespect, I get slammed for using incorrect terminology too)) but rather CPU overhead. There is no additional latency to using the IGP to do final render output with Hybrid Power. Just additional CPU overhead ((though minor)).

Chris

latency == delay (measured in units of time marked by a beginning and an end point)

it is by all means correct usage of the term

also, just because you don't perceive it doesn't mean it isn't there ;)

Latency in this case may very well be insignificant on a per-frame basis, but averaged across frames and taken as a whole across a second, its effects become more statistically meaningful. Also, when combined with other latency-inducing factors it just adds up. More latency is never a good thing in real-time computing, last time I checked.
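
To put some purely hypothetical numbers on that accumulation argument (the per-frame delay here is an assumption, not a measured figure):

# Sketch with assumed numbers: a per-frame delay that looks tiny in isolation
# still eats a measurable share of every second of real-time rendering.

per_frame_delay_ms = 1.0        # assumed extra delay per frame from the extra hop
fps = 60.0                      # target framerate
frame_budget_ms = 1000.0 / fps  # ~16.7 ms per frame at 60 fps

total_delay_ms = per_frame_delay_ms * fps
print(f"{total_delay_ms:.0f} ms of added delay per second, "
      f"{per_frame_delay_ms / frame_budget_ms:.1%} of each 60 fps frame budget")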
 
I think Nvidia may find themselves in hot water if they press Intel too far. AMD could squeeze them from the other side as well. At the same time, I suppose giving their SLI to Intel would lower their margins, but it seems they are already finding themselves in a tough spot despite their recent success.
 
Multi-GPU latency and the overhead of doing this are completely unrelated to each other. It has nothing to do with perception. There isn't an increase in inter-frame delay or additional input latency from using the IGP to do final render output. At most there is additional CPU/system overhead, which largely affects CPU-limited scenarios and reduces framerates a smidgen. The only thing it really shares with SLI is the additional driver overhead that occurs, much like SLI in CPU-limited scenarios.
 
...by using the IGP to do final render output.
I still don't like it. Not enough available tech detail, I guess. Do we know the bandwidth between the GPU & NVIO? I'd much prefer the display to be connected to the dGPU, with low-power mode routing the opposite way, from the mGPU, under low load. That way, running 3D apps under load doesn't compromise performance. This may not gel with their notions of headless dGPUs & output only via their own chipset mGPUs. Perhaps this is what Intel won't allow - somebody creating a lock-in on "their" platform...

This has implications for AMD, too. If Nvidia goes headless for dGPUs, they can play the same game with AMD & lock consumers into their chipsets on the AMD platform, too. If they can't get past Intel, they'll have to produce two SKUs, negating the full benefit of that strategy...
 
http://www.fudzilla.com/index.php?option=com_content&task=view&id=7713&Itemid=1

Nvidia's Director of PR, Derek Perez, has told Fudzilla that Intel actually won't let Nvidia make an Nforce chipset that will work with Intel's Nehalem generation of processors.

We confirmed this from Intel’s side, as well as other sources. Intel told us that there won't be an Nvidia's chipset for Nehalem. Nvidia will call this a "dispute between companies that they are trying to solve privately," but we believe it's much more than that.
Funny, that, NVidia talking on the record to Fudzilla. PR, funny old game.

Jawed
 
Multi-GPU latency and the overhead of doing this are completely unrelated to each other.

You misunderstand. All I meant by the multiple-GPU comment was that the more cards you add into the mix, the worse your worst-case time to frame completion becomes. Again, adding latency in real-time computing is just not a good idea if it can be avoided (and the current implementation, with native display ports on the graphics card, certainly avoids this).

It has nothing to do with perception. There isn't an increase in inter-frame delay or additional input latency from using the IGP to do final render output. At most there is additional CPU/system overhead, which largely affects CPU-limited scenarios and reduces framerates a smidgen. The only thing it really shares with SLI is the additional driver overhead that occurs, much like SLI in CPU-limited scenarios.

Chris, unless Nvidia has somehow figured out how to transfer data instantaneously, there IS a delay in sending the contents of the frame buffer to the IGP and out through the display port (whichever it may be).
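
As a rough, hypothetical calculation (neither the exact path nor the effective link bandwidth is public, so these are assumed numbers), copying a finished frame over PCIe does take a measurable slice of the frame budget:

# Sketch with assumed numbers: time to push a finished frame to the IGP over PCIe.
width, height, bytes_per_pixel = 1920, 1200, 4   # hypothetical 32-bit desktop resolution
frame_bytes = width * height * bytes_per_pixel   # about 9.2 MB per frame

effective_bw = 5e9                               # assume ~5 GB/s usable on a PCIe 2.0 x16 link
transfer_ms = frame_bytes / effective_bw * 1000.0

frame_budget_ms = 1000.0 / 60.0                  # 60 fps frame time
print(f"~{transfer_ms:.2f} ms per frame, ~{transfer_ms / frame_budget_ms:.0%} of a 60 fps frame budget")

Whether that delay matters in practice is a separate question, but it isn't zero.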

As for
 