NVIDIA shows signs ... [2008 - 2017]

Charlie sells clicks, much like Howard Stern. He's IT's shock jock. Don't call him a journalist then, call him a reporter. If nVidia wanted to get rid of him they'd sue him for libel or defamation. One side says he's not worth it, the other side says nVidia couldn't win. He'll get his stuff out there anyway, and his stories are far more entertaining than Digitimes.

If you want to call him an IT shock jock then fine, I grant that the way he goes about things makes sense, but as with other shock jocks I am not interested in it. It is too bad really, b/c the bump stuff was fairly good information to get out, though there were some problems with it as well.
 
If I were to suggest something new for Charlie to sink his teeth in, it would be how Intel and Microsoft are "negotiating" with the industry to prevent ARM/linux netbooks from being produced.

These side deals have been done since day 1. You just don't know about them because they tend to be done over a handshake in a small room with 4 or 5 people in it. The netbook thing was done for a reason, you just have to ask what was traded for it.

I have no knowledge of this, but try and think about it this way: MS caved in to Intel's view on netbooks just as Intel is launching CULV. You could view this as a way for MS to bump ASPs with higher versions of 7, but I think it is more what Intel wanted.

So, what was traded? Moblin handoff perhaps? Decreased investment in Linux? Sinking or delaying of key drivers?

You can't know unless you have someone in the room. I have heard about several negotiations like this from people in the room, and they generally make your jaw drop. That said, you can never figure it out from the outside.

-Charlie
 
Well, we have 2 other people that used to work at the Inq saying the complete opposite of what Charlie just said about the gt300 :smile:, but maybe it's the Inq itself that has this "vendetta", since if we look back Fuad was like Charlie when he was there; I don't remember if Theo was.
 
So what were NVidia's tears over the price of Atom from Intel all about, with Jen practically crying in public whenever the subject has come up these last ~5 months? Do you think that was anything but emotional posturing? Good marketing to paint yourself as the underdog fighting for consumers' rights and choice.

I think Intel will cave totally on this, apologize, and set prices basically where they should be without any hint of bundling.

This will be 3 days before Pineview ships in volume. :) Game over, Intel wins because they are much smarter than Nvidia. Nvidia is being played like a fiddle, and JHH is sinking the company by playing along. If it gets to court, how well do you think the sound clips of JHH's greatest hits will play out? The "War" memo perhaps? They handed Intel a "Do whatever you want" card, and Intel used it.

NV sunk their own boat here.

-Charlie
 
I think Intel will cave totally on this, apologize, and set prices basically where they should be without any hint of bundling.

This will be 3 days before Pineview ships in volume. :) Game over, Intel wins because they are much smarter than Nvidia.
Hehe, I like this scenario. One catch though: does that mean the current Atom would get killed instantly? And if not, what makes you think Pineview will automatically win in higher-end sockets against Atom+Ion, or heck Core2+Ion2/Nano+Ion2? And heck, last I heard Pineview was still an MCM...

I still suspect that the netbook/nettop/Ion/Tegra/ARM/stuff endgame is x86 getting badly commoditized and Intel getting hurt as a result of it, TBH. But that's just a hunch, and I could be wrong.
 
Well, we have 2 other people that used to work at the Inq saying the complete opposite of what Charlie just said about the gt300 :smile:, but maybe it's the Inq itself that has this "vendetta", since if we look back Fuad was like Charlie when he was there; I don't remember if Theo was.

Yes, but one of us has the specs of the cards. Also, one of us realizes that NV is at a power wall, reticle size wall, and is moving in the wrong direction (generalization) for graphics performance. One of us actually gets the science behind the chips, and has a background in chemistry, chemical engineering, physics and CSCI (plus a lot of biology and genetics). That said, Fudo is really good at what he does, I won't comment on Theo.

When the specs for both cards come out, you will see. If you think about it, NV at 500mm^2 has about the biggest card you can reasonably make and sell profitably in the price bands they are aiming at.

With a shrink, they will have about 2x the transistor count, so about 2x the shaders. This means, optimally, 2x the performance plus whatever efficiencies they can squeak out. Let's say 2.5x performance.

Take some out for inefficient use of area to support GPGPU, and then a bit more to support DX11, and let's just call it back at 2x performance for a 500mm^2 die.
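
As a rough back-of-envelope sketch of that estimate, with every number below being an illustrative assumption rather than a real spec:

```python
# Back-of-envelope performance estimate for a full-node shrink at ~500mm^2.
# Every number here is an illustrative assumption, not a real spec.
optimistic_speedup = 2.5      # ~2x shaders from ~2x transistors, plus efficiencies
gpgpu_dx11_area_tax = 0.20    # assumed fraction of area spent on GPGPU/DX11 support

graphics_speedup = optimistic_speedup * (1.0 - gpgpu_dx11_area_tax)
print(f"Estimated graphics speedup: ~{graphics_speedup:.1f}x")  # ~2.0x
```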

Then you are staring down a power wall. If 40nm saves you 25% power, you can, very simplistically speaking, add 25% more transistors OR bump clock by a bit, but not both. If you double transistor count, you are looking at significantly lowering clock or getting into asbestos mining.
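
And a similarly crude sketch of the power side, using the usual first-order rule that dynamic power goes roughly as transistor count times clock times voltage squared; the 25% saving and the scaling factors are assumptions for illustration only:

```python
# First-order dynamic power model: P ~ N * f * V^2, leakage ignored.
# The 25% process saving and the scaling factors are illustrative assumptions.
def relative_power(n_scale, f_scale, v_scale, process_saving=0.75):
    return n_scale * f_scale * v_scale ** 2 * process_saving

# Spend the 40nm saving on ~25% more transistors at the same clock and voltage:
print(relative_power(1.25, 1.0, 1.0))  # ~0.94x -> roughly fits the old budget

# Double the transistor count at the same clock and voltage:
print(relative_power(2.0, 1.0, 1.0))   # 1.5x -> well over budget
# ...so a doubled chip has to drop clock/voltage, or run a lot hotter.
```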

If NV doubles the transistor count and only keeps the clock the same, they are in deep trouble. I think 2x performance will be _VERY_ hard to hit, very hard. The ways to up that are mostly closed to them, and architecturally, they aimed wrong.

ATI on the other hand can effectively add in 4x the transistors should they need, but 2x is more than enough to keep pace, so they will be about 250mm^2 for double performance. Power is more problematic, but if you need to throw transistors at it to control power/leakage better, ATI can do so much more readily than NV.

ATI's power budget takes GDDR5 into account, NV's doesn't, so another black mark for NV. How much do you think the rumored 512b GDDR5 will consume?

The next gen is going to be a clean kill for ATI, but Nvidia will kick ass in the "convert video to widget" benchmarks. That is something they can be proud of, it uses physics, cuda, and pixie dust. Hell, it probably sells tens, maybe hundreds of GPUs.

Q3/Q4 and likely Q1 are going to be very tough for NV.

Then again, I said that a while ago.
http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture

-Charlie
 
Hehe, I like this scenario. One catch though: does that mean the current Atom would get killed instantly? And if not, what makes you think Pineview will automatically win in higher-end sockets against Atom+Ion, or heck Core2+Ion2/Nano+Ion2? And heck, last I heard Pineview was still an MCM...

I still suspect that the netbook endgame is x86 getting badly commoditized and Intel getting hurt as a result of it, TBH. But that's just a hunch, and I could be wrong.

Intel historically prices things to make the new one attractive. If Pineview costs as much as Atom, and you don't have to buy a chipset... then when faster Pineviews come out, they up the price on them, game over.

-Charlie
 
Yet you try to suggest that what he's posted in this thread on that subject represents information that isn't in his articles.

It's pretty clear that he's elaborated far more in his posts here than he has in his flamboyant Inq articles.

What is JHH's point? Have you noticed how he doesn't actually have one?

Lol, so Intel's bundle pricing for Atom+chipset makes sense to you from any other standpoint than simply deterring Atom-only sales? I'd like to see someone make that argument.

The signs are NVidia knew these products were dicing with death because the materials science says so. It's fucking obvious from the materials science, in fact. That's Charlie's point. That means NVidia knowingly sold faulty products hoping that they'd fall out of warranty before the shit hit the fan. The only companies that'd give a toss about that would be those insurance companies that sell extended warranties.

Nvidia has obviously been pretty shady during this entire episode but people dismiss Inq articles as fluff pieces simply because the delivery is full of bias, emotion and general bitterness. So the message that comes across is that the author hates Nvidia for whatever personal reasons and not that he's trying to educate or protect the innocents.

Is the "mainstream" press giving Nvidia a free pass on this? And Charlie is simply upset that people aren't aware of the magnitude and gravity of Nvidia's underhandedness? He obviously has his connections in the industry but he's just one guy with an obvious agenda. It's not easy to figure out where all the pieces fall simply based on Inquirer articles. But I imagine some folks here have more info than others and have more reason to be outraged (hearkening back to Shillgate :LOL:).

To be honest, I'm still not sure whether Charlie is angry that Nvidia is getting away with murder or if he's gloating that they are about to get their comeuppance. Either way, it's probably inconsequential.
 
Well, we have 2 other people that used to work at the Inq saying the complete opposite of what Charlie just said about the gt300 :smile:, but maybe it's the Inq itself that has this "vendetta", since if we look back Fuad was like Charlie when he was there; I don't remember if Theo was.

Ironic, what you put in your sig. Are you going to play another dangerous game like this again? Wasn't Charlie more right about GT200/RV770 than Theo, Fudo... and you? :???:
I honestly don't know if you were preaching about GT200 and RV770 here, but you certainly were at other forums.
 
Well, we have 2 other people that used to work at the Inq saying the complete opposite of what Charlie just said about the gt300 :smile:, but maybe it's the Inq itself that has this "vendetta", since if we look back Fuad was like Charlie when he was there; I don't remember if Theo was.

Theo's site is practically paid for by nV (the Palit challenge, headlines like "Nvidia’s $50 card destroys ATI’s $500 one"), so I wouldn't expect anything but green news from BSonVnews. He's getting fed a lot of bogus stuff regarding ATI (the imminent launch of the Radeon 5600 back in January).

Fudo is much the same: two weeks before the 4890 launched he wrote a piece about how the 4890 didn't just have higher clock speeds but different shaders etc. But by making 3 posts by different people on things that are true, they drown out their bogus rumours.
I do like Fudzilla though; it has a lot of other news etc., but a lot of it feels like captioning other news blurbs.
 
Yes, but one of us has the specs of the cards. Also, one of us realizes that NV is at a power wall, reticle size wall, and is moving in the wrong direction (generalization) for graphics performance. One of us actually gets the science behind the chips, and has a background in chemistry, chemical engineering, physics and CSCI (plus a lot of biology and genetics). That said, Fudo is really good at what he does, I won't comment on Theo.

When the specs for both cards come out, you will see. If you think about it, NV at 500mm^2 has about the biggest card you can reasonably make and sell profitably in the price bands they are aiming at.

With a shrink, they will have about 2x the transistor count, so about 2x the shaders. This means, optimally, 2x the performance plus whatever efficiencies they can squeak out. Let's say 2.5x performance.

Take some out for inefficient use of area to support GPGPU, and then a bit more to support DX11, and let's just call it back at 2x performance for a 500mm^2 die.

Then you are staring down a power wall. If 40nm saves you 25% power, you can, very simplistically speaking, add 25% more transistors OR bump clock by a bit, but not both. If you double transistor count, you are looking at significantly lowering clock or getting into asbestos mining.

If NV doubles the transistor count and only keeps the clock the same, they are in deep trouble. I think 2x performance will be _VERY_ hard to hit, very hard. The ways to up that are mostly closed to them, and architecturally, they aimed wrong.

ATI on the other hand can effectively add in 4x the transistors should they need, but 2x is more than enough to keep pace, so they will be about 250mm^2 for double performance. Power is more problematic, but if you need to throw transistors at it to control power/leakage better, ATI can do so much more readily than NV.

ATI's power budget takes GDDR5 into account, NV's doesn't, so another black mark for NV. How much do you think the rumored 512b GDDR5 will consume?

The next gen is going to be a clean kill for ATI, but Nvidia will kick ass in the "convert video to widget" benchmarks. That is something they can be proud of, it uses physics, cuda, and pixie dust. Hell, it probably sells tens, maybe hundreds of GPUs.

Q3/Q4 and likely Q1 are going to be very tough for NV.

Then again, I said that a while ago.
http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture

-Charlie


Specs aren't everything if the architecture has changed. Look at the rv670 to the rv770: if we look at just the specs, they would seem to have a performance ratio similar to the g92 to the gt200, but that didn't happen with the rv770.

If you think nV hasn't learned anything from what happened with the gt200, I think that is a bit shallow. I agree about the size wall, but in all honesty the watts per mm^2 are still better than what we see on the rv770, idle and load. Chips that are much larger than the competing parts, with performance above them, still have similar power envelopes.

40nm saving 25% power doesn't automatically translate to 25% more transistors either, or vice versa; it's all about the engineering of the part. Again, the 512-bit bus doesn't have anything to do with power consumption, and although more die area does increase power usage in general, nV has found ways around this in the gt200. Even if they use GDDR3 memory (which is probably unlikely for gt300), they still have a power advantage.

If this is what you are basing your stories on, I suggest you talk to some of the engineers here because what you're saying is not correct. You are basing it on conjecture and not actuality.
 
Also, one of us realizes that NV is at a power wall, reticle size wall, and is moving in the wrong direction (generalization) for graphics performance. One of us actually gets the science behind the chips, and has a background in chemistry, chemical engineering, physics and CSCI (plus a lot of biology and genetics).

Which one would that be? :p That's a pretty bold claim to be making. Implying that you're smarter than Nvidia's entire engineering team. Bravo.

Take some out for inefficient use of area to support GPGPU, and then a bit more to support DX11, and let's just call it back at 2x performance for a 500mm^2 die.

Ah so Nvidia's move towards generalization is in the wrong direction, but Larrabee's generalization is just peachy. But of course, Larrabee's generalization is the good kind! :LOL:

Charlie, it's hard to take you seriously when all you preach is doom and gloom for Nvidia and blue skies for AMD. Your "technical" analysis is cursory and simply projects the worst possible outcome for anything Nvidia is doing.
 
Theo's site is practically paid for by nV (the Palit challenge, headlines like "Nvidia’s $50 card destroys ATI’s $500 one"), so I wouldn't expect anything but green news from BSonVnews. He's getting fed a lot of bogus stuff regarding ATI (the imminent launch of the Radeon 5600 back in January).

Fudo is much the same: two weeks before the 4890 launched he wrote a piece about how the 4890 didn't just have higher clock speeds but different shaders etc. But by making 3 posts by different people on things that are true, they drown out their bogus rumours.
I do like Fudzilla though; it has a lot of other news etc., but a lot of it feels like captioning other news blurbs.


That is true, but I was more interested in the way they posted at the Inq vs. now. It was all doom and gloom before, but now they are much more neutral in their approach.
 
That is true, but I was more interested in the way they posted at the Inq vs. now. It was all doom and gloom before, but now they are much more neutral in their approach.

Maybe all this involves a Gypsy fortune teller? (I see green, an N and bankruptcy!)
 
So where are we at now with predictions of Nvidia's collapse? That OEMs are going to sue for compensation for faulty chips all the way back to NV4x, and the resulting fines/payments will wipe out all of Nvidia's cash and other assets? Maybe they can get in touch with AMD's creditors, those guys are suckers for a good deal.

The OEMs can't be too pleased with the situation or Nvidia's handling of it. But do we have any reliable source for Nvidia's exposure should this escalate? I could swear I saw a $1000/unit estimate somewhere....
 
40nm saving 25% power doesn't automatically translate to 25% more transistors either, or vice versa; it's all about the engineering of the part. Again, the 512-bit bus doesn't have anything to do with power consumption, and although more die area does increase power usage in general, nV has found ways around this in the gt200. Even if they use GDDR3 memory (which is probably unlikely for gt300), they still have a power advantage.

If this is what you are basing your stories on, I suggest you talk to some of the engineers here because what you're saying is not correct. You are basing it on conjecture and not actuality.

Not the sharpest bowling ball of the bunch, are you? A 512b bus takes more power than a 256b one. GDDR5 takes more power than GDDR3 for similar bit widths.

As for transistor -> power, you are right, but as a general rule, it is a good starting point. In very parallel architectures, it tends to work out fairly well as an estimator. Less so for monolithic cores.

Are you suggesting that adding transistors linearly will not increase power fairly linearly? Are you suggesting that for similar bit widths, GDDR5 does not take more power than GDDR3? Look at the numbers for the 4850 vs 4870, they are different how? OC or downclock them to the same frequency, and the difference is what again?
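
As a very rough sketch of that 4850 vs 4870 comparison, assuming core dynamic power scales with clock at a fixed voltage; the board powers, clocks and the core/memory split below are approximate figures and assumptions, not measured data:

```python
# Crude isolation of the memory subsystem's cost by normalising the 4870's
# core power to the 4850's clock.  Board powers, clocks and the core-power
# share are rough assumptions, and core voltage differences are ignored.
P_4850, f_4850 = 110.0, 625.0  # W (TDP), MHz core clock, GDDR3 board
P_4870, f_4870 = 160.0, 750.0  # W (TDP), MHz core clock, GDDR5 board
core_share = 0.7               # assumed fraction of board power burned in the GPU core

core_4870_at_4850_clock = P_4870 * core_share * (f_4850 / f_4870)  # P ~ f at fixed V
P_4870_downclocked = core_4870_at_4850_clock + P_4870 * (1 - core_share)

print(f"4870 scaled to 4850 clocks: ~{P_4870_downclocked:.0f} W vs {P_4850:.0f} W")
# Whatever gap remains is, very roughly, what the faster GDDR5 setup and other
# board differences cost.
```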

-Charlie
 
Not the sharpest bowling ball of the bunch, are you? A 512b bus takes more power than a 256b one. GDDR5 takes more power than GDDR3 for similar bit widths.

As for transistor -> power, you are right, but as a general rule, it is a good starting point. In very parallel architectures, it tends to work out fairly well as an estimator. Less so for monolithic cores.

Are you suggesting that adding transistors linearly will not increase power fairly linearly? Are you suggesting that for similar bit widths, GDDR5 does not take more power than GDDR3? Look at the numbers for the 4850 vs 4870, they are different how? OC or downclock them to the same frequency, and the difference is what again?

-Charlie


When did I say that? It's not purely linear when you are talking about the full die, because the entire die isn't the damn bus, or are you suggesting it is? Last time I looked, the bus only took up around 10% of the die on the gt200. And not just that, you don't need to talk to anyone to see that you are generalizing when you don't need to. In very parallel architectures it isn't purely linear either, because the gt200 die doesn't run at the same frequency in all regions, and neither do AMD's cores, though to a lesser degree.

What are you suggesting, "forget the clocking of those transistors, the amount of transistors, the efficiency of the parts in question"?
 
Not the sharpest bowling ball of the bunch, are you? A 512b bus takes more power than a 256b one.
What takes the most power: 1x512 or (2x256+Bridge)? I think that should be pretty obvious.

Anyway Charlie, I hope you'll do more of a mea culpa if you're wrong here than you did with G80 vs R600. If you knew about what they're doing at the back-end, what generalization sweet-spot they're aiming at, and how they'll scale the design for derivatives, this might be different. But you don't, so you really shouldn't pretend to understand the dynamics of the DX11 gen even if you did have some info.

Regarding power when adding transistors: it's a bit more complex than that in reality. Here's the truth: assuming leakage isn't too absurdly high (maybe not the best starting point for 40nm GPUs!), then more parallelism via more transistors usually means *lower power* for a *given performance target*. One word: voltage. Here's a pretty graph that brings the point home: http://dl.getdropbox.com/u/232602/Power.png - this is obviously not the main goal of adding transistors in GPUs, but in the ultra-high-end with the risk of thermal limitation this factor needs to be seriously considered along with a few other things.
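
A minimal sketch of that voltage argument, assuming throughput goes as units times clock and dynamic power as units times clock times voltage squared; the scaling factors are purely illustrative:

```python
# Throughput ~ N * f;  dynamic power ~ N * f * V^2.
# Leakage and the extra area of the wider design are ignored here, which is
# exactly the caveat above about 40nm.  All scaling factors are illustrative.
def throughput(n, f):
    return n * f

def power(n, f, v):
    return n * f * v ** 2

narrow = dict(n=1.0, f=1.0, v=1.00)  # baseline: fewer units, full clock and voltage
wide   = dict(n=2.0, f=0.5, v=0.85)  # twice the units at half the clock, lower voltage

print(throughput(narrow['n'], narrow['f']), throughput(wide['n'], wide['f']))  # 1.0 1.0
print(power(**narrow), power(**wide))  # 1.0 vs ~0.72: same work, ~28% less dynamic power
```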
 