NVidia's Dispute over Nehalem Licensing - SLI not involved

Discussion in 'Graphics and Semiconductor Industry' started by Jawed, Jun 1, 2008.

  1. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    So, what's on the table?

    http://news.cnet.com/8301-10784_3-9956256-7.html

    Where's the popcorn?

    Jawed
     
  2. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,809
    Likes Received:
    2,223
    "We are not seeking any SLI concession from Nvidia in exchange for granting any Nehalem license rights to Nvidia,"

    Well they should
     
  3. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    I think this is the most interesting part:
    At Analyst Day 2008, Jen-Hsun clearly seemed to think they were covered for every single Intel socket as long as their licensing agreement remained in effect, and here it looks like Intel doesn't think so - even though NVIDIA wouldn't have much of anything to bargain with to get new sockets (except SLI, presumably) if they aren't already covered.

    There's another massive implication here: at Analyst Day, Jen-Hsun basically implied that if Intel wanted to get rid of that licensing agreement, he'd be more than willing to - presumably implying that he believes Larrabee & Intel IGPs are dependent on it and that if Intel wants to abandon those, he'll gladly stop caring about the relatively much lower amount of money in the chipset market.

    So NVIDIA doesn't even *want* this license agreement to remain in effect, and now it looks like Intel is claiming that it really doesn't give NVIDIA what they think it does. Either this gets resolved quickly, or this risks turning into a ridiculously massive legal battle with the core businesses of both companies at stake. Fun!
     
  4. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    If anything, denying Nvidia a QuickPath license (and Nvidia's Intel-based chipset marketshare is not even that big of a deal right now anyway) would only cause it to reintegrate some of its engineering strength back into GPUs, something that even Intel wouldn't be so happy about.
    Larrabee's competition would suddenly be potentially stronger.
     
  5. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Uhhh, with all due respect to NVIDIA's chipset team, I don't think them contributing to NV's DX11 GPUs would make them anything but buggier and substantially delayed :p
     
  6. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    So maybe there is more to the Asian rumor that NV will remove display outputs from GeForce cards in Q2 2009, making them usable only with nForce or AMD chipsets that have display outputs on the mainboard, in order to force Intel to offer them a license if it wants leading GPU technology to work with Intel CPUs. :lol:
     
  7. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Well, getting rid of them would clear quite a few expenses (remnants from ULi in Taiwan, the Indian R&D facility, and their chipset teams back home).
    The money saved *could* be redirected to hiring new blood and general GPU R&D, no?
     
  8. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    Is it possible for them to corrupt our pixels like they corrupt our data too? Ooh! What if they raise TDPs even further so we can finally cook meals on our computers while we game!
     
  9. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    Great idea. Let's introduce more latency into our rendering pipeline. That'll make gamers happy, especially in the era of multiple GPUs ;)
     
  10. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    I'm not sure "latency" is the correct term, at least in the sense of inter-frame delay ((see MFA? I said it :p)) ((no disrespect, I get slammed for using incorrect terminology too)); it's rather CPU overhead. There is no additional latency from using the IGP to do final render output with Hybrid Power, just additional CPU overhead ((though minor)).

    Chris
     
    #10 ChrisRay, Jun 4, 2008
    Last edited by a moderator: Jun 4, 2008
  11. stevem

    Regular

    Joined:
    Feb 11, 2002
    Messages:
    632
    Likes Received:
    3
    Is that due to PCIe traffic or buffer copy?
     
  12. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    PCIe traffic, I believe ((but not 100% certain)); Nvidia uses the SMBus for this. But we're talking very negligible amounts. Nvidia says up to 5%, but realistically it's like 1% or even less.

    Chris
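
    As a rough sanity check on those figures, here is a quick sketch (Python) of what a 1% or 5% extra CPU cost per frame would do to an entirely CPU-limited framerate; the inverse-scaling assumption is an added one and only holds when the CPU is the sole bottleneck:

        # Back-of-envelope: effect of extra CPU overhead on a fully CPU-limited framerate.
        # Assumes framerate scales inversely with per-frame CPU time, which is only
        # true when the CPU is the only bottleneck.

        def fps_with_overhead(base_fps: float, overhead_fraction: float) -> float:
            """Framerate after adding a fractional CPU overhead to each frame."""
            base_frame_time = 1.0 / base_fps          # seconds of CPU work per frame
            return 1.0 / (base_frame_time * (1.0 + overhead_fraction))

        for overhead in (0.01, 0.05):                 # the 1% and 5% figures quoted above
            print(f"{overhead:.0%} overhead: 60 fps -> {fps_with_overhead(60, overhead):.1f} fps")
        # 1% overhead: 60 fps -> 59.4 fps
        # 5% overhead: 60 fps -> 57.1 fps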
     
  13. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,809
    Likes Received:
    2,223
    So Nvidia are lying - no, I don't believe you ;)
     
  14. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    Just overly conservative as usual. :p
     
  15. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    latency == delay (measured in units of time marked by a beginning and an end point)

    it is by all means correct usage of the term

    also, just because you don't perceive it doesn't mean it isn't there ;)

    Latency in this case may very well be insignificant on a per-frame basis, but averaged across frames and taken as a whole across a second, its effects become more statistically meaningful. Also, when combined with other latency-inducing factors, it just adds up. More latency is never a good thing in RT computing, last time I checked.
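
    To put the "it adds up" point in concrete terms, here is a purely illustrative sketch (Python); every number below is an invented round figure, not a measured or NVIDIA-quoted value, and the hypothetical extra buffer hop is simply stacked on top of render and scan-out time in a 60 fps chain:

        # Illustrative only: how small per-frame delays stack up in the display chain.
        # Every number below is an invented round figure, not a measured value.

        frame_ms = 1000.0 / 60.0                  # 60 fps target: ~16.7 ms per frame

        latency_sources_ms = {
            "render (one frame)":      frame_ms,
            "extra buffer hop to IGP": 1.0,       # hypothetical cost of the extra copy
            "display scan-out":        frame_ms,
        }

        total = sum(latency_sources_ms.values())
        for name, ms in latency_sources_ms.items():
            print(f"{name:26s} {ms:5.1f} ms")
        print(f"{'total (input to photon)':26s} {total:5.1f} ms "
              f"({latency_sources_ms['extra buffer hop to IGP'] / total:.0%} from the extra hop)")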
     
  16. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,838
    Likes Received:
    302
    Location:
    PA USA
    I think Nvidia may find themselves in hot water if they press Intel too far. AMD could squeeze them from the other side as well. At the same time, I suppose giving SLI to Intel would lower their margins, but it seems they're already finding themselves in a tough spot despite their recent success.
     
  17. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    Multi-GPU latency and the overhead of doing this are completely unrelated to each other. It has nothing to do with perception. There isn't an increase in inter-frame delay or additional input latency from using the IGP to do final render output. At most there is additional CPU/system overhead, which largely affects CPU-limited scenarios and reduces framerates a smidgen. The only thing it really shares with SLI is the additional driver overhead that occurs, much like SLI in CPU-limited scenarios.
     
    #17 ChrisRay, Jun 4, 2008
    Last edited by a moderator: Jun 4, 2008
  18. stevem

    Regular

    Joined:
    Feb 11, 2002
    Messages:
    632
    Likes Received:
    3
    I still don't like it. Not enough available tech detail, I guess. Do we know the bandwidth between the GPU & NVIO? I'd much prefer the display to be connected to the dGPU, with low-power mode routing the opposite way from the mGPU under low load. That way, running 3D apps under load doesn't compromise performance. This may not gel with their notions of headless dGPUs & output only via their own chipset mGPUs. Perhaps this is what Intel won't allow - somebody creating a lock-in on "their" platform...

    This has implications for AMD, too. If Nvidia goes headless for dGPUs, they can play the same game with AMD & lock consumers into their chipsets on the AMD platform, too. If they can't get past Intel, they'll have to produce two SKUs, negating the full benefit of that strategy...
     
    #18 stevem, Jun 5, 2008
    Last edited by a moderator: Jun 5, 2008
  19. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    http://www.fudzilla.com/index.php?option=com_content&task=view&id=7713&Itemid=1

    Funny, that, NVidia talking on the record to Fudzilla. PR, funny old game.

    Jawed
     
  20. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    You misunderstand. All I meant by the multiple-GPU comment was that the more cards you add into the mix, the worse your worst-case time to frame completion becomes. Again, adding latency in RT computing is just not a good idea if it can be avoided (and the current implementation, with native display ports on the graphics card, certainly avoids this).

    Chris, unless Nvidia has somehow figured out how to transfer data instantaneously, there IS a delay in sending the contents of the frame buffer to the IGP and out through the display port (whichever it may be).
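
    For scale, a rough order-of-magnitude estimate of that per-frame copy (Python); the figures are assumptions, not anything NVIDIA has stated: a 1920x1200, 32-bit frame and an effective PCIe 2.0 x16 bandwidth of ~6 GB/s after protocol overhead:

        # Rough order-of-magnitude estimate of the per-frame copy described above.
        # Assumptions (not from the thread): a 1920x1200, 32-bit frame buffer and
        # an effective PCIe 2.0 x16 bandwidth of ~6 GB/s after protocol overhead.

        FRAME_BYTES = 1920 * 1200 * 4             # ~9.2 MB per frame
        EFFECTIVE_BW = 6e9                        # bytes/second, assumed usable bandwidth

        transfer_ms = FRAME_BYTES / EFFECTIVE_BW * 1000.0
        print(f"frame size:      {FRAME_BYTES / 1e6:.1f} MB")
        print(f"transfer time:   {transfer_ms:.2f} ms per frame")
        print(f"share of 60 fps: {transfer_ms / (1000.0 / 60.0):.1%} of a 16.7 ms frame")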

    As for
     