NVIDIA shows signs ... [2008 - 2017]

Hmmm, wonder if this is the M1710 replacement I've been waiting for. Yes, I once stupidly thought a gaming notebook actually made sense, but in my defense I got a ridiculous deal on it through my employer.

And yeah, the Intel Atom pricing thing was never viewed as illegal. But certainly bitch worthy.
 

Not that I'm convinced it's going to play out as Charlie speculates, but you'll notice that he clearly referred to Nvidia being squeezed out of Dell's desktop lineup (which is quite apparent if you visit their online store), while speculating that notebooks wouldn't be far behind. The fact that this particular model (a design win from several months ago) sports a 9400M doesn't really invalidate his claim.
 


From what we hear, Dell's laptops and AIWs aren't far behind either.

That was from that article. I think he just noticed that Dell desktops were light on nV cards on their website; that's probably due to better pricing from AMD to Dell and has nothing to do with Dell not using nV cards because of what happened with the bad bumps.
 

Yeah, I don't see a connection to bad bumps, unless DT cards actually do have problems. My default assumption is that the bumping problems are mostly in notebooks.

DK
 
Not that I'm convinced it's going to play out as Charlie speculates, but you'll notice that he clearly referred to Nvidia being squeezed out of Dell's desktop lineup (which is quite apparent if you visit their online store), while speculating that notebooks wouldn't be far behind. The fact that this particular model (a design win from several months ago) sports a 9400M doesn't really invalidate his claim.

Well, at least on the local German website, there's no such thing to be seen. Plus, Dell officially denied Charlie's claim:
http://www.pcgameshardware.de/aid,6...ig-aus-dem-Portfolio-Update/Grafikkarte/News/

Dell said:
To ensure we deliver the best value to our customers, Dell offers a wide choice of products and features. Dell regularly uses both nVidia and ATi graphic cards in its systems, and will select components based on delivering the best value and productivity to customers. nVidia graphics can be found on a variety of Dell systems including the XPS and Alienware gaming desktops and laptops, Studio XPS 13 laptop, Studio desktop, and our all-in-one PCs.
 
My point was basically "we know how much 512b GDDR5 will consume, just look at R700, increase it for higher memory clock speeds and reduce it by whatever the PCIe bridge takes. And maybe also 1GB vs 2GB". So even assuming (unlikely) that NV engineers are too dumb to look at a GDDR5 specsheet, it's not hard to see that it's not such a big deal. Either way this is just a detail and it's best we don't focus too much on it...

Too dumb to look at the spec sheet? That strategy served their packaging people well.... :)

All I am trying to say is that if they switch to GDDR5 from GDDR3 while keeping the width the same, power used by the chip for memory IO will go up. From what I have seen on power, it is hardly a 'detail'. The only comparison you can reasonably make between the two architectures at the same width is the 4850 and 4870, and there is quite a difference there.

Forget the X2, that is 256 to 512, sort of. In the case of the hypothetical GT200 vs GT300, the width stays the same, making the ATI cards a much better comparison.

With regards to GT300 in general, Charlie, the problem is you made a whole bunch of bizarre claims in your GT300 article that are nearly certainly horribly wrong. And that's before we consider non-GT300 tidbits you got wrong, such as:
- Process node density. The node numbers are marketing: 55nm is not 0.72x of 65nm, it's 0.81x. The real density figures are publicly available and you should only look at those. Density from one full node to another isn't fully comparable, but you should at least be looking at the kgates/mm² and SRAM cell size numbers.

Yes, I did simplify on purpose, I wasn't trying to do a doctoral level course on semiconductor manufacturing. If you want to go down that path, how about analysing how closely each chip follows those specs? Minimum size is an industry joke made for bragging rights. Try comparing a real chip made on TSMC 40 to Intel's 45 to see how closely the specs are followed when it matters.

Since there are specs, more specs, and reality, I use the quoted widths as an approximation. If you take into account all the DFM rules for each different application, you could write a very thick book on the topic before you even got to the point.

However, if you think minimum drawn size on a process has anything to do with the end result chip, feel free to break out the electron microscope and show us. I did when I wanted to prove something. :)
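For what it's worth, the arithmetic behind the two density figures being argued above is easy to check. This is just a back-of-envelope sketch, with the 0.81x figure taken from the post above rather than from any foundry datasheet:

```python
# Back-of-envelope check of the 55nm-vs-65nm figures argued above.
# The 0.72x number comes from squaring the marketing node ratio; the ~0.81x
# number is the published-density figure quoted in the post, not a datasheet value.

linear_ratio = 55 / 65               # ~0.846: scaling of the node "number" itself
naive_area_ratio = linear_ratio**2   # ~0.716: where the 0.72x figure comes from
quoted_real_ratio = 0.81             # real density scaling as quoted above

print(f"naive (55/65)^2 area scaling : {naive_area_ratio:.2f}")
print(f"quoted real density scaling  : {quoted_real_ratio:.2f}")
# Moral: node names alone overstate the shrink; only published kgates/mm2 and
# SRAM cell sizes tell you how much area a real design actually saves.
```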

- "The shrink, GT206/GT200b" - I can tell you with 100% certainty GT206 got canned and had nothing to do with GT200b. However it still exists in another form
- GT212 wasn't a GT200 shrink, it had a 384-bit GDDR5 bus and that's just the beginning. You have had the right codenames and part of the story of what happened for some time, which is much more than the other rumor sites/leakers, but you don't have the right details.

I may have some of the details wrong, but then again, the code names keep changing, getting re-used and modified. It is a bitch to keep track of them if you don't have a person you can directly ask. I try and keep as up to date as I can.

- "Nvidia chipmaking of late has been laughably bad. GT200 was slated for November of 2007 and came out in May or so in 2008, two quarters late." - it was late to tape-out, but it was fast from tape-out to availability (faster than RV770). G80 was originally aimed at 2005 for a long time but only taped-out in 2006, does that make it a failure? For failures, look at G98/MCP78/MCP79 instead.
- etc.

I am not sure why this is relevant to the discussion at hand, but yeah, I know. I know all about the MCP79, what a joke that was. I mean, look at the MacBookPro's Hybrid mode....oops.

Seriously though, if you want to list NV failures, this could be a very long thread.....


Then we get to GT300:
- "The basic structure of GT300 is the same as Larrabee" - if that's true, you need scalar units for control/routing logic. That would probably be one of the most important things to mention...

Slight overgeneralization for the audience I am writing for. Would you be happier if I said, "Both have a number of general purpose cores that are replacing fixed function units in order to be more efficient at GPU compute tasks, but give up absolute GPU performance in order to do so."?

- "Then the open question will be how wide the memory interface will have to be to support a hugely inefficient GPGPU strategy. That code has to be loaded, stored and flushed, taking bandwidth and memory." - Uhh... wow. Do you realize how low instruction bandwidth naturally is for parallel tasks? It never gets mentioned in the context of, say, Larrabee because it's absolutely negligible.

Yes I do, that is the whole point of SIMD you know. Who said anything about instructions though? You need to load the data, and that usually is an order of magnitude more bandwidth. Am I missing something, or are you putting words in my mouth? Did I mention instructions vs data anywhere?
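To put rough numbers on the instruction-versus-data point being argued here, the following is a toy back-of-envelope sketch; every figure in it is an illustrative assumption, not a measured one:

```python
# Toy illustration of why instruction bandwidth is negligible on a wide SIMD
# machine while data bandwidth is not. All numbers are illustrative assumptions.

warp_width          = 32   # lanes sharing a single instruction stream (assumed)
instr_bytes         = 8    # bytes fetched per instruction (assumed)
data_bytes_per_lane = 4    # one 32-bit operand loaded per lane (assumed)

instr_traffic = instr_bytes                         # fetched once per warp
data_traffic  = warp_width * data_bytes_per_lane    # fetched once per lane

print(f"data : instruction traffic ~ {data_traffic / instr_traffic:.0f} : 1")
# ...and the instruction side mostly hits small on-chip caches anyway, so the
# external-bandwidth cost of "loading the code" is tiny next to loading the data.
```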

- "There is no dedicated tesselator" - so your theory is that Henry Moreton decided to apply for early retirement? That's... interesting. Just like it was normal to expect G80 to be subpar for the geometry shader, it is insane to think GT300 will be subpar at tesselation..

Yes. The whole point is that if you have X shaders, and tessellation takes 10% of X, that leaves you with 90% of X to do everything else. That means if you flip on tessellation, your performance WILL go down.

If you have dedicated hardware to do it, you flip it on, and performance is more or less the same. Is that so hard to comprehend? I said that pretty clearly.
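A minimal sketch of that arithmetic, using the 10% figure from the post purely as an illustration:

```python
# Illustration of the argument above: tessellation run on the shader array
# takes its cut out of the same pool that does everything else. The 10% figure
# is the example used in the post, not a measured number.

shader_pool        = 1.0    # normalize the shader array's throughput to 1.0
tessellation_share = 0.10   # fraction consumed by tessellation (illustrative)

left_for_everything_else = shader_pool - tessellation_share
print(f"throughput left for the rest of the frame: {left_for_everything_else:.0%}")

# With a dedicated tessellator, the shader pool keeps ~100% of its throughput;
# whether the shared-pool hit actually shows up in frame rate depends on whether
# the workload is shader-limited, which is the counterpoint raised further down.
```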

- " ATI has dedicated hardware where applicable, Nvidia has general purpose shaders roped into doing things far less efficiently" - since this would also apply to DX10 tasks, your theory seems to be that every NV engineer is a retard and even their "DXn gen must be the fastest DXn-1 chip" mentality has faded away.

No, I think the engineers are very smart. Management, on the other hand, seems to be..... well... not so smart. They repeatedly mouth off in stupid ways, and the surrounding yes-men make it happen no matter how daft the plan is. There is no free thought at Nvidia; it is a firing offense.

When JHH says X, they make X happen no matter how much it costs, or how much it hurts in the long run. Go back and talk to the financial analysts, ask them what NV was promising them for last summer, this summer, and next. Then look at the architectures of the GT200 and GT300. See a resemblance?

Then stop and think about how the GPGPU market did not comply with the wishes of Nvidia. The architecture is there, the market isn't. If the market had followed JHH's prognostications, it would have been much better for the company.

If I took the time to list all this, it's so that I don't feel compelled to reply in this thread again too much. It's important to understand that it's difficult to take you seriously about NV's DX11 gen when that's your main article on it, and I wasn't even exhaustive in my list of issues with it and didn't include the details I know. Their DX11 gen is more generalized (duh) but not in the direction/way you think, not as much as you think, and not for the reasons you think. Anyway, enough posting for me now! :)

I don't always print what I know. I didn't print when ATI got R870 silicon back, nor did I print up when it taped out. Hell, I didn't even print up the specifics of the chip, but I do know them. To a lesser degree, I know the NV specs.

I also know I generalize. I can't print out the entire history of the semiconductor industry in every article. Sorry.

All this said, in 2-3 days, you will understand what trouble NV is in. Maybe I will drop you a hint Monday.

-Charlie
 
Too dumb to look at the spec sheet? That strategy served their packaging people well.... :)

Link it. I was under the impression GDDR5 uses less power per clock than GDDR3, and that it is considerably less.

All I am trying to say is that if they switch to GDDR5 from GDDR3 while keeping the width the same, power used by the chip for memory IO will go up. From what I have seen on power, it is hardly a 'detail'. The only comparison you can reasonably make between the two architectures at the same width is the 4850 and 4870, and there is quite a difference there.

You forget about the core clock difference too? Oh yeah, hmm, that won't increase power consumption.

Forget the X2, that is 256 to 512, sort of. In the case of the hypothetical GT200 vs GT300, the width stays the same, making the ATI cards a much better comparison.

Your hypothetical situations just make me :LOL:


Yes, I did simplify on purpose, I wasn't trying to do a doctoral level course on semiconductor manufacturing. If you want to go down that path, how about analysing how closely each chip follows those specs? Minimum size is an industry joke made for bragging rights. Try comparing a real chip made on TSMC 40 to Intel's 45 to see how closely the specs are followed when it matters.

Since there are specs, more specs, and reality, I use the quoted widths as an approximation. If you take into account all the DFM rules for each different application, you could write a very thick book on the topic before you even got to the point.

However, if you think minimum drawn size on a process has anything to do with the end result chip, feel free to break out the electron microscope and show us. I did when I wanted to prove something. :)


You don't know how to write anything other than simplified :devilish:

I may have some of the details wrong, but then again, the code names keep changing, getting re-used and modified. It is a bitch to keep track of them if you don't have a person you can directly ask. I try and keep as up to date as I can.


Key word: some. Hmm, more like a lot.



I am not sure why this is relevant to the discussion at hand, but yeah, I know. I know all about the MCP79, what a joke that was. I mean, look at the MacBookPro's Hybrid mode....oops.

Seriously though, if you want to list NV failures, this could be a very long thread.....

Slight overgeneralization for the audience I am writing for. Would you be happier if I said, "Both have a number of general purpose cores that are replacing fixed function units in order to be more efficient at GPU compute tasks, but give up absolute GPU performance in order to do so."?

Seriously though, we should link all your failures too; it would be a very long thread as well :LOL:

Yes I do, that is the whole point of SIMD you know. Who said anything about instructions though? You need to load the data, and that usually is an order of magnitude more bandwidth. Am I missing something, or are you putting words in my mouth? Did I mention instructions vs data anywhere?

Are you actually talking about memory bandwidth in your original comment?


Yes. The whole point is that if you have X shaders, and tessellation takes 10% of X, that leaves you with 90% of X to do everything else. That means if you flip on tessellation, your performance WILL go down.

Only if it's a shader-limited program; also, depending on how the pipeline is structured, it might not have ill effects.

If you have dedicated hardware to do it, you flip it on, and performance is more or less the same. Is that so hard to comprehend? I said that pretty clearly.

Really, that was very profound. Performance still drops with ATi's tessellator; it's around a 30% performance hit when going to heavy scenes. Also, stream-out performance is very important as well.

https://a248.e.akamai.net/f/674/920...sselationForDetailedAnimatedCrowds_SLIDES.pdf


Page 33 for the performance analysis. You want to stop the BS or what?

Not going to bother with the rest of your post; it's useless.
 
Since I'm naturally going to be in disagreement with Charlie (thumbs up for bothering to reply, though!) over our discussion, I thought I'd be a nice guy and partially stand up for him on this bit by Razor, which got deleted for flames I didn't include in my quote:
Razor1 said:
[...] your article stated Dell isn't dealing with nV anymore, and two days later they launch a new notebook! [...]
Charlie's point is as follows: notebook design wins happen X months in advance, desktop design wins happen Y months in advance, with X > Y. The decision to go rough on NVIDIA was taken somewhere between Y and X months ago, so it affects new desktop design wins, but will only affect new notebook launches in a couple of months.

I don't know if this is true, but I believe that the latest events do not contradict it yet. On the other hand, I would find it bizarre for NV to claim they'll increase their share for the next-gen notebook platform in that case, because that platform comes out later than usual and I doubt (X-Y) is greater than the time gap between now and then. So my opinion remains that Charlie is involuntarily exaggerating on this matter. But we'll see...

Groo said:
Forget the X2, that is 256 to 512, sort of. In the case of the hypothetical GT200 vs GT300, the width stays the same, making the ATI cards a much better comparison.
Okay, let me try this one last time: it's obvious that GDDR5 takes at least as much power as GDDR3, and often more, at common clock speeds (e.g. 900MHz for GDDR3 and 1800MHz for GDDR5). I never did contest that. Although apparently Razor does, or misunderstood what you said to mean per-clock, which is where the confusion between the two of us comes from....

My point is you can hit the 300W TDP number with a 512-bit GDDR5 bus, because R700 did just that. So you shouldn't exaggerate the importance of it, and in fact 1GB of GDDR5 on a 512-bit bus nearly certainly takes less power than the 1792MB of GDDR3 on an aggregate 896-bit bus (a la the GTX 295). Furthermore, assuming there's an X2 variant of the RV870, 2x256-bit GDDR5 plus a bridge (integrated or not) is by definition going to take the same or more power than 1x512-bit GDDR5! Yes, that's a massive amount of power either way, but NV is not necessarily at a disadvantage on this specific point. You're mixing up various claims because you assume that RV870 will be competitive with NV's DX11 flagship like RV770 was with GT200, but I think that should be debated separately.
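To make the bookkeeping behind that comparison explicit, here is a rough sketch; only the bus widths and capacities come from the posts above, and the 32-bit-per-device packaging is my assumption for illustration:

```python
# Rough device/pin bookkeeping behind the comparison above. Bus widths and
# capacities come from the posts; the 32-bit-wide DRAM device assumption is
# purely illustrative.

def devices_on_bus(bus_width_bits, device_width_bits=32):
    """Number of DRAM packages needed to populate a bus of the given width."""
    return bus_width_bits // device_width_bits

configs = {
    "1 x 512-bit GDDR5, 1GB (hypothetical single chip)":   devices_on_bus(512),
    "2 x 448-bit GDDR3, 1792MB (GTX 295-style aggregate)": devices_on_bus(896),
    "2 x 256-bit GDDR5 + bridge (R700-style X2)":          devices_on_bus(512),
}

for name, n in configs.items():
    print(f"{name}: {n} DRAM devices, {n * 32} data pins")

# The point being argued: the X2 route drives just as many GDDR5 pins as one
# 512-bit bus, plus a bridge chip on top, so a single 512-bit GDDR5 interface
# is not automatically a power disadvantage versus those alternatives.
```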

Since there are specs, more specs, and reality, I use the quoted widths as an approximation. If you take into account all the DFM rules for each different application, you could write a very thick book on the topic before you even got to the point.
That's a reasonable position for 65/55->40, but for 65->55 that's an optical shrink assuming you prepared for it from the start. RV630->RV635 would be a good example of that.

However, if you think minimum drawn size on a process has anything to do with the end result chip, feel free to break out the electron microscope and show us. I did when I wanted to prove something.
Well in my experience, SRAM cell sizes on TSMC 65nm for handheld chips are more or less what you'd expect them to be given the natural overhead of a SRAM array and the claimed cell size. But yeah, once you get into more complex cases where the design isn't just optimized for density and maybe leakage, the marketing claims can easily fly out of the window. And real-world kgates/mm² are, of course, yet another problem. Anyway, let's move on...

I may have some of the details wrong, but then again, the code names keep changing, getting re-used and modified. It is a bitch to keep track of them if you don't have a person you can directly ask. I try and keep as up to date as I can.
Well, there are definitely some changes, but I wouldn't go that far. I think there have been more cancellations, really; for example, the iGT209 integrated GPU that was part of MCP7C, which, BTW, was the first MCP to support VIA Nano, which is why we have to wait for Ion2 there now ;)

I am not sure why this is relevant to the discussion at hand, but yeah, I know. I know all about the MCP79, what a joke that was. I mean, look at the MacBookPro's Hybrid mode....oops.
Yeah, I wasn't even thinking about the non-chip HW parts either. G98 had the most hilariously total HW failure, but let's not get into that...

Slight overgeneralization for the audience I am writing for. Would you be happier if I said, "Both have a number of general purpose cores that are replacing fixed function units in order to be more efficient at GPU compute tasks, but give up absolute GPU performance in order to do so."?
You haven't said whether there's scalar logic in there (see my original point) or how job creation/routing/etc. works, so no, I would not be much happier. Remember: adding a scalar part a la Larrabee doesn't magically make you more efficient at today's CUDA workloads in HPC. It's a complete paradigm shift. It requires significantly re-educating your developer base. And if it's not actually used much, it would actually *reduce* GPGPU efficiency (even more than GPU efficiency, since presumably internal drivers would make an effort to use it!)

A general-purpose core isn't something automatic where you just add some interrupt hardware, attach it to a ring bus and, ta-da, it can do everything on its own! There are several other things, including generating new threads, and SIMD isn't going to cut it for the control logic, which is what you are arguing NVIDIA wants to replace. Both are fundamentally new paradigms, not efficiency increases for existing workloads. I would be very surprised, both positively (because of the development opportunities) and negatively (because of the die size), if either of those happened this generation.

I would expect NV to invest in that to be able to do it around 2013-2014, and that's when the importance of the interconnects increases dramatically and people like Dally become even more valuable. But I don't think they'll do it for DX11.

Yes I do, that is the whole point of SIMD you know. Who said anything about instructions though? You need to load the data, and that usually is an order of magnitude more bandwidth. Am I missing something, or are you putting words in my mouth? Did I mention instructions vs data anywhere?
I think you're missing something. Let's say you did triangle setup and interpolation in hardware. That doesn't mean you need to write everything to memory one more time and then read it again; you could just keep it in a small SRAM block, basically a larger FIFO than there is right now. Or you could keep it in L2 cache if you're assuming that has become as generic as on Larrabee.
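A crude sketch of what that buys you, with every number below being an illustrative placeholder rather than a real chip figure:

```python
# Crude illustration of the on-chip FIFO point above: post-setup triangle data
# kept in a small SRAM/FIFO costs no external bandwidth, while spilling it to
# DRAM costs a write plus a read back. Every number here is a placeholder.

tris_per_second = 500e6   # triangle throughput (illustrative)
bytes_per_tri   = 64      # post-setup attributes per triangle (illustrative)

spill_traffic = tris_per_second * bytes_per_tri * 2   # write + read back
print(f"extra DRAM traffic if spilled: {spill_traffic / 1e9:.0f} GB/s")

fifo_entries = 1024       # 'a larger FIFO than there is right now'
fifo_bytes   = fifo_entries * bytes_per_tri
print(f"on-chip buffer needed instead: {fifo_bytes / 1024:.0f} KiB")
# i.e. a few tens of KiB of SRAM can stand in for tens of GB/s of external
# bandwidth, which is why moving setup to the shader core does not have to
# mean flushing everything through memory.
```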

Yes. The whole point is that if you have X shaders, and tessellation takes 10% of X, that leaves you with 90% of X to do everything else. That means if you flip on tessellation, your performance WILL go down.

If you have dedicated hardware to do it, you flip it on, and performance is more or less the same. Is that so hard to comprehend? I said that pretty clearly.
You said that very clearly, and my point is that's a ridiculous assumption, because NVIDIA actually has at least as much background in tessellation hardware as AMD via Henry Moreton, and I doubt there is someone as pro-tessellation as high up at AMD as Henry is at NV. Even if NV moved, say, input assembly/setup/interpolation/blending to the shader core (which I don't believe, except for blending, BTW), I refuse to believe they would be moving either rasterization or tessellation to it.

Then stop and think about how the GPGPU market did not comply with the wishes of Nvidia. The architecture is there, the market isn't. If the market had followed JHH's prognostications, it would have been much better for the company.
Actually, if I compare it to their expectations in the G80 timeframe, they've been a bit slow to revenue, but they weren't that far from reality. Same for Tegra, really (if you know about some of the design wins, which I do). One thing where they were much farther from reality is QuadroPlex (so much for a hundreds-of-millions business - oops!), but I don't think anybody in the press or analyst community cares.

All this said, in 2-3 days, you will understand what trouble NV is in. Maybe I will drop you a hint Monday.
I always like juicy info, so looking forward to it... :)
 
Bye all

Guys,
I came here to respond to a few posts, but found too much deleted, and stuff by certain other people left here. With this kind of biased administration, it simply isn't worth my time.
I am not going to post, or likely even read this forum, for a while; I never say never. It isn't worth it any more, and I have too much on my plate.

-Charlie
 
Fuck, I broke the mojo... On a serious note, if deleting a bait post and a flame post (flaming you, BTW) makes you feel censored, Charlie, there's not much that can be done. Letting this turn into an FFA flame-fest wouldn't exactly be productive.
 
I find it rather strange that the Dell news of no GeForces in desktops only reaches the news sites now.
I bought a Dell a few months ago, and already had no alternative than to order a GeForce 9800GTX+ separately and install it myself.
They only offered Radeons or Quadro cards. Quadros were only in the workstation series. I actually did get a workstation (Precision), but then replaced the low-end Quadro with a 9800GTX+.

So what I'm saying is: Dell hasn't offered GeForces in desktops for months already, it didn't happen just a few days ago.
I already mentioned it here:
http://forum.beyond3d.com/showpost.php?p=1292147&postcount=612
 


Yup. It's been that way for a while. Dell typically chooses what it can get its best prices on. Right now they seem to be doing that with the AMD cards.
 
Confusing... one of Charlie's posts gets deleted by AlexV (?), then Arun posts that he undeleted it, then Arun's post is itself deleted, and we still don't know which of Charlie's posts was in question. Oh, well.

For what it's worth, I encourage Charlie to stick around and continue to provide some forum spice and colour commentary of the articles he writes for the Inq. He's got a provocative style, but I've seen nothing disrespectful in his behaviour.
 
Yeah, actually I didn't undelete anything, Chrome screwed up. And the post that was deleted wasn't the one I thought, instead it was one which said "You know that rant earlier when I accused someone on this forum of willfull ignorance and stupidity. Pat yourself on the head, you are now the champ there." - I've seen worse, but given that it did bait Razor who flamed in response, it didn't add anything to the thread. And since Charlie is gone, why bother...
 
Will AMD be able to produce enough chips for Dell? I was under the impression that they discontinued the 4830 and 4850 while making the 4770, and they have reportedly had severe problems getting enough 4770s. Or will Dell mostly be using the 4600 series and below, plus the 4870 and upwards?
 
That's incorrect; the 4850s aren't discontinued yet, AFAIK.
 