NVIDIA shows signs ... [2008 - 2017]

Discussion in 'Graphics and Semiconductor Industry' started by Geo, Jul 2, 2008.

Thread Status:
Not open for further replies.
  1. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
  2. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,890
    Likes Received:
    344
    Location:
    PA USA
    If you want to call him an IT shock jock, then fine; I grant that the way he goes about things makes sense. But, as with other shock jocks, I am not interested in it. It is too bad, really, because the bump stuff was genuinely good information to get out, though there were some problems with it as well.
     
  3. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    These side deals have been done since day one. You just don't know about them because they tend to be done over a handshake in a small room with four or five people in it. The netbook thing was done for a reason; you just have to ask what was traded for it.

    I have no knowledge of this, but try to think about it this way: MS caved in and enforced Intel's view on netbooks just as Intel is launching CULV. You could view this as a way for MS to bump ASPs with higher versions of 7, but I think it is more what Intel wanted.

    So, what was traded? Moblin handoff perhaps? Decreased investment in Linux? Sinking or delaying of key drivers?

    You can't know unless you have someone in the room. I have heard about several negotiations like this from people in the room, and they generally make your jaw drop. That said, you can never figure it out from the outside.

    -Charlie
     
  4. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Well, we have two other people who used to work at the Inq saying the complete opposite of what Charlie just said about the GT300 :smile:. But maybe it's the Inq itself that has this "vendetta", since if we look back, Fuad was like Charlie when he was there; I don't remember if Theo was.
     
  5. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    I think Intel will cave totally on this, apologize, and set prices basically where they should be without any hint of bundling.

    This will be 3 days before Pineview ships in volume. :) Game over, Intel wins because they are much smarter than Nvidia. Nvidia is being played like a fiddle, and JHH is sinking the company by playing along. If it gets to court, how well do you think the sound clips of JHH's greatest hits will play out? The "War" memo perhaps? They handed Intel a "Do whatever you want" card, and Intel used it.

    NV sunk their own boat here.

    -Charlie
     
  6. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Hehe, I like this scenario. One catch, though: does that mean the current Atom would get killed instantly? And if not, what makes you think Pineview will automatically win in higher-end sockets against Atom+Ion, or heck, Core2+Ion2/Nano+Ion2? And heck, last I heard Pineview was still an MCM...

    I still suspect that the netbook/nettop/Ion/Tegra/ARM/stuff endgame is x86 getting badly commoditized and Intel getting hurt as a result of it, TBH. But that's just a hunch, and I could be wrong.
     
  7. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    Yes, but one of us has the specs of the cards. Also, one of us realizes that NV is at a power wall and a reticle-size wall, and is moving in the wrong direction (generalization) for graphics performance. One of us actually gets the science behind the chips, and has a background in chemistry, chemical engineering, physics and CSCI (plus a lot of biology and genetics). That said, Fudo is really good at what he does; I won't comment on Theo.

    When the specs for both cards come out, you will see. If you think about it, NV at 500mm^2 has about the biggest card you can reasonably make and sell profitably in the price bands they are aiming at.

    With a shrink, they will have about 2x the transistor count, so about 2x the shaders. This means, optimally, 2x the performance plus whatever efficiencies they can squeeze out. Let's say 2.5x performance.

    Take some out for inefficient use of area to support GPGPU, then a bit more to support DX11, and let's just call it back at 2x performance for a 500mm^2 die.

    Then you are staring down a power wall. If 40nm saves you 25% power, you can, very simplistically speaking, add 25% more transistors OR bump the clock a bit, but not both. If you double the transistor count, you are looking at significantly lowering the clock or getting into asbestos mining.

    If NV doubles the transistor count and only keeps the clock the same, they are in deep trouble. I think 2x performance will be _VERY_ hard to hit, very hard. The ways to push that up are mostly closed to them, and architecturally, they aimed wrong.

    ATI, on the other hand, can effectively add 4x the transistors should they need to, but 2x is more than enough to keep pace, so they will be at about 250mm^2 for double the performance. Power is more problematic, but if you need to throw transistors at it to control power/leakage better, ATI can do so much more readily than NV.

    ATI's power budget takes GDDR5 into account, NV's doesn't, so another black mark for NV. How much do you think the rumored 512b GDDR5 will consume?

    The next gen is going to be a clean kill for ATI, but Nvidia will kick ass in the "convert video to widget" benchmarks. That is something they can be proud of; it uses physics, CUDA, and pixie dust. Hell, it probably sells tens, maybe hundreds, of GPUs.

    Q3/Q4 and likely Q1 are going to be very tough for NV.

    Then again, I said that a while ago.
    http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture

    -Charlie
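    A back-of-the-envelope sketch of the arithmetic in the post above. The 500mm^2 die, 2x transistor count, 2.5x ideal speedup and 25% power saving are the figures stated in the post; the 20% GPGPU/DX11 area overhead is an illustrative guess standing in for "take some out ... then a bit more", not a measured value.

```python
# Sketch of the scaling argument above; inputs are the post's stated
# assumptions, except the GPGPU/DX11 overhead, which is an illustrative guess.

die_area_mm2 = 500           # "about the biggest card you can reasonably make"
transistor_scale = 2.0       # full-node shrink: ~2x transistors in the same area
ideal_speedup = 2.5          # 2x shaders plus whatever efficiencies get squeezed out
gpgpu_dx11_overhead = 0.20   # illustrative guess: area spent on GPGPU + DX11 support

# Performance after discounting the area not spent on graphics throughput.
est_speedup = ideal_speedup * (1.0 - gpgpu_dx11_overhead)            # ~2.0x

# Power-wall side: if 40nm saves ~25% power per transistor at the same clock,
# doubling the transistor count still lands ~1.5x over the old budget
# unless clocks (and/or voltage) come down.
power_saving_40nm = 0.25
relative_power_same_clock = transistor_scale * (1.0 - power_saving_40nm)  # 1.5x

print(f"Estimated performance: ~{est_speedup:.1f}x in ~{die_area_mm2} mm^2")
print(f"Relative power at unchanged clocks: ~{relative_power_same_clock:.2f}x the old budget")
```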
     
  8. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    Intel historically prices things to make the new one attractive. If Pineview costs as much as Atom, and you don't have to buy a chipset... Then when faster Pineviews come out, they up the price on them. Game over.

    -Charlie
     
  9. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    It's pretty clear that he's elaborated far more in his posts here than he has in his flamboyant Inq articles.

    Lol, so Intel's bundle pricing for Atom+chipset makes sense to you from any standpoint other than simply deterring Atom-only sales? I'd like to see someone make that argument.

    Nvidia has obviously been pretty shady during this entire episode, but people dismiss Inq articles as fluff pieces simply because the delivery is full of bias, emotion and general bitterness. So the message that comes across is that the author hates Nvidia for whatever personal reasons, not that he's trying to educate or protect the innocents.

    Is the "mainstream" press giving Nvidia a free pass on this? And Charlie is simply upset that people aren't aware of the magnitude and gravity of Nvidia's underhandedness? He obviously has his connections in the industry but he's just one guy with an obvious agenda. It's not easy to figure out where all the pieces fall simply based on Inquirer articles. But I imagine some folks here have more info than others and have more reason to be outraged (hearkening back to Shillgate :lol:).

    To be honest, I'm still not sure whether Charlie is angry that Nvidia is getting away with murder or if he's gloating that they are about to get their comeuppance. Either way, it's probably inconsequential.
     
  10. Sound_Card

    Regular

    Joined:
    Nov 24, 2006
    Messages:
    936
    Likes Received:
    4
    Location:
    San Antonio, TX
    Ironic, what you put in your sig. Are you going to play another dangerous game like this again? Wasn't Charlie more right about GT200/RV770 than Theo, Fudo... and you? :???:
    I honestly don't know if you were preaching about GT200 and RV770 here, but you certainly were at other forums.
     
  11. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Theo's site is practically paid for by nV (the Palit challenge, headlines such as "Nvidia's $50 card destroys ATI's $500 one"), so I wouldn't expect anything but green news from BSonVnews. He's also getting fed a lot of bogus stuff regarding ATI (the imminent launch of the Radeon 5600 back in January).

    Fudo is much the same: two weeks before the 4890 launched he wrote a piece about how the 4890 didn't just have higher clock speeds but different shaders, etc. But by making three posts by different people on things that are true, they drown out their bogus rumour.
    I do like Fudzilla, though; it has a lot of other news, etc., but a lot of it feels like captioning other sites' news blurbs.
     
    #811 neliz, May 27, 2009
    Last edited by a moderator: May 27, 2009
  12. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Specs aren't everything if the architecture has changed; look at the RV670 to the RV770. If we look at just the specs, they would seem to keep a performance ratio similar to that of the G92 to the GT200, but that didn't happen with the RV770.

    If you think nV hasn't learned anything from what happened with the GT200, I think that is a bit shallow. I agree about the size wall, but in all honesty the watts per mm^2 are still better than what we see on the RV770, at idle and under load. Chips that are much larger than competing parts, with performance above them, still have similar power envelopes.

    40nm saving 25% power doesn't automatically translate to 25% more transistors either, or vice versa; it's all about the engineering of the part. Again, the 512-bit bus doesn't have anything to do with power consumption, and although more die area does increase power usage in general, nV has found ways around this in the GT200. Even if they use GDDR3 memory (which is probably unlikely for the GT300), they still have a power advantage.

    If this is what you are basing your stories on, I suggest you talk to some of the engineers here, because what you're saying is not correct. You are basing it on conjecture and not actuality.
     
    #812 Razor1, May 27, 2009
    Last edited by a moderator: May 27, 2009
  13. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    Which one would that be? :razz: That's a pretty bold claim to be making. Implying that you're smarter than Nvidia's entire engineering team. Bravo.

    Ah so Nvidia's move towards generalization is in the wrong direction, but Larrabee's generalization is just peachy. But of course, Larrabee's generalization is the good kind! :lol:

    Charlie, it's hard to take you seriously when all you preach is doom and gloom for Nvidia and blue skies for AMD. Your "technical" analysis is cursory and simply projects the worst possible outcome for anything Nvidia is doing.
     
  14. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    That is true, but I was more interested in the way they posted at the Inq vs. now. It was all doom and gloom before, but now they are much more neutral in their approach.
     
  15. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Maybe all this involves a Gypsy fortune teller? (I see green, an N, and bankruptcy!)
     
  16. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
  17. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    So where are we now with predictions of Nvidia's collapse? That OEMs are going to sue for compensation for faulty chips all the way back to NV4x, and the resulting fines/payments will wipe out all of Nvidia's cash and other assets? Maybe they can get in touch with AMD's creditors; those guys are suckers for a good deal.

    The OEMs can't be too pleased with the situation or Nvidia's handling of it. But do we have any reliable source for Nvidia's exposure should this escalate? I could swear I saw a $1000/unit estimate somewhere...
     
  18. Groo The Wanderer

    Regular

    Joined:
    Jan 23, 2007
    Messages:
    334
    Likes Received:
    2
    Not the sharpest bowling ball of the bunch, are you? A 512-bit bus takes more power than a 256-bit one. GDDR5 takes more power than GDDR3 for similar bit widths.

    As for transistor -> power, you are right, but as a general rule, it is a good starting point. In very parallel architectures, it tends to work out fairly well as an estimator. Less so for monolithic cores.

    Are you suggesting that adding transistors linearly will not increase power fairly linearly? Are you suggesting that, for similar bit widths, GDDR5 does not take more power than GDDR3? Look at the numbers for the 4850 vs. the 4870; how are they different? Overclock or downclock them to the same frequency, and the difference is what again?

    -Charlie
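    A minimal sketch of the comparison being pointed at here: take two boards that share the same GPU but differ in memory type (as with the 4850's GDDR3 vs. the 4870's GDDR5), bring the core clocks to the same level, and attribute most of the remaining board-power delta to the memory subsystem. The wattage figures below are hypothetical placeholders to show the bookkeeping, not measured values.

```python
# Hypothetical illustration of isolating the memory-subsystem power delta
# between two boards with the same GPU; all wattages are placeholders.

def memory_power_delta(board_a_watts: float, board_b_watts: float,
                       core_power_delta_watts: float) -> float:
    """Rough memory-subsystem power difference once core clocks are equalized."""
    return (board_b_watts - board_a_watts) - core_power_delta_watts

# Placeholder numbers purely to show the arithmetic:
delta = memory_power_delta(board_a_watts=110.0,            # hypothetical GDDR3 board
                           board_b_watts=150.0,            # hypothetical GDDR5 board
                           core_power_delta_watts=10.0)    # residual core-clock difference
print(f"Rough memory-subsystem power delta: ~{delta:.0f} W")
```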
     
  19. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    When did I say that? It's not purely linear when you are talking about the full die, because the entire die isn't the damn bus, or are you suggesting it is? Last time I looked, the bus only took up around 10% of the die on the GT200. And beyond that, you don't need to talk to someone to see it; you are generalizing when you don't need to. In very parallel architectures it isn't purely linear either, because the GT200 die doesn't run at the same frequency in all regions, nor do AMD's cores, though to a lesser degree.

    What are you suggesting, "forget the clocking of those transistors, the number of transistors, and the efficiency of the parts in question"?
     
    #819 Razor1, May 27, 2009
    Last edited by a moderator: May 27, 2009
  20. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    What takes the most power: 1x512 or (2x256+Bridge)? I think that should be pretty obvious.

    Anyway Charlie, I hope you'll do more of a mea culpa if you're wrong here than you did with G80 vs. R600. If you knew what they're doing at the back-end, what generalization sweet spot they're aiming at, and how they'll scale the design for derivatives, this might be different. But you don't, so you really shouldn't pretend to understand the dynamics of the DX11 gen even if you did have some info.

    Regarding power when adding transistors: it's a bit more complex than that in reality. Here's the truth: assuming leakage isn't too absurdly high (maybe not the best starting point for 40nm GPUs!), then more parallelism via more transistors usually means *lower power* for a *given performance target*. One word: voltage. Here's a pretty graph that brings the point home: http://dl.getdropbox.com/u/232602/Power.png - this is obviously not the main goal of adding transistors in GPUs, but in the ultra-high-end with the risk of thermal limitation this factor needs to be seriously considered along with a few other things.
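    The voltage point can be made concrete with the usual dynamic-power relation, P ~ C * V^2 * f. A minimal sketch, with illustrative voltage and clock numbers (assumptions, not measurements): doubling the parallel hardware while halving the clock and dropping the voltage the lower clock permits lands below the original dynamic power at the same nominal throughput, leakage ignored.

```python
# Minimal sketch of "wider but slower can use less power": dynamic power
# scales roughly as C * V^2 * f; leakage is ignored, numbers are illustrative.

def dynamic_power(cap: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power, P ~ C * V^2 * f."""
    return cap * volts**2 * freq_ghz

baseline = dynamic_power(cap=1.0, volts=1.20, freq_ghz=1.5)    # narrow and fast
wide     = dynamic_power(cap=2.0, volts=1.00, freq_ghz=0.75)   # 2x units, half clock, lower voltage

# Same nominal throughput (units * clock), noticeably lower dynamic power.
print(f"baseline: {baseline:.2f}, wide-and-slow: {wide:.2f} (ratio {wide/baseline:.2f})")
```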
     