NVIDIA shows signs ... [2008 - 2017]

Which one would that be? :p That's a pretty bold claim to be making. Implying that you're smarter than Nvidia's entire engineering team. Bravo.

Wow, you really are lacking in the reading comprehension department, aren't you? I said nothing of the sort. Go back, read it again, slowly, and try to figure out what was said. I know it makes your head hurt, but try. Really.

Ah so Nvidia's move towards generalization is in the wrong direction, but Larrabee's generalization is just peachy. But of course, Larrabee's generalization is the good kind! :LOL:

Exactly. If you don't understand this, go back to your console games and shiny things. If you don't understand why, you don't understand the semiconductor business. One is about integration, the other is about adding things to a doomed paradigm. Unless Nvidia can not only make, but bring to market, an x86 part, they are dead.

Charlie, it's hard to take you seriously when all you preach is doom and gloom for Nvidia and blue skies for AMD. Your "technical" analysis is cursory and simply projects the worst possible outcome for anything Nvidia is doing.

No, my technical analysis is very deep, I just don't post everything I know. Please do write, with technical specifics, why you think GT300 will be equal to or better than the R870. Start with the architectures, then move on to the fabrication aspects, then let's discuss profit margins. You do know all this stuff, don't you, and aren't just basing it on a random German site that is known for making stuff up, right?

If you can't do the above, your critical statements make you look pretty stupid. So, that said, since you have to know the stuff above to not be a moron, please educate us. The reply button is below, use it to educate the unwashed masses here.

-Charlie
 
Lol, so Intel's bundle pricing for Atom+chipset makes sense to you from any standpoint other than simply deterring Atom-only sales? I'd like to see someone make that argument.
What are the facts on the pricing? When asked JHH doesn't have any. Maybe he does, but he hasn't said them so far. Maybe I've missed him coming clean.

Nvidia has obviously been pretty shady during this entire episode but people dismiss Inq articles as fluff pieces simply because the delivery is full of bias, emotion and general bitterness. So the message that comes across is that the author hates Nvidia for whatever personal reasons and not that he's trying to educate or protect the innocents.
I guess his readers are split multiple ways - not all of them treat what he writes as useless.

Is the "mainstream" press giving Nvidia a free pass on this?
They report the odd headline, when it's fed to them it seems.

Which journalists do you think are actively engaged on this story?

And Charlie is simply upset that people aren't aware of the magnitude and gravity of Nvidia's underhandedness? He obviously has his connections in the industry but he's just one guy with an obvious agenda. It's not easy to figure out where all the pieces fall simply based on Inquirer articles. But I imagine some folks here have more info than others and have more reason to be outraged (hearkening back to Shillgate :LOL:).
It seems to me Charlie's just picked on a relatively easy target (though it does seem to require quite a bit of digging :p). His "obvious agenda" is simply being bothered to do something with a stream of sewage that's pumping where it shouldn't. Plus it sounds like he's having a ball.

To be honest, I'm still not sure whether Charlie is angry that Nvidia is getting away with murder or if he's gloating that they are about to get their comeuppance. Either way, it's probably inconsequential.
So inconsequential you're sticking your oar in.

Jawed
 
Exactly. If you don't understand this, go back to your console games and shiny things. If you don't understand why, you don't understand the semiconductor business.
Hey, I removed a flame post aimed at you, don't make this discussion too personal or insulting, especially when trinibwoy is as damn far from a 'console games and shiny things' guy as you can be and does have solid technical knowledge. I don't want to be forced to take sides... :)
Please do write, with technical specifics, why you think GT300 will be equal to or better than the R870. Start with the architectures, then move on to the fabrication aspects, then let's discuss profit margins. You do know all this stuff, don't you, and aren't just basing it on a random German site that is known for making stuff up, right?
If you imply anyone in the world today knows all those factors in sufficient depth for *both* GT300 and R870, then I'm not sure what I'm supposed to tell you. Assuming otherwise (which I am), as for the architecture & fabrication aspects, I hinted at a few critical factors I know about in my previous post. Talking about profit margins at this point is a bit, let us say, premature.

Trini's point is simply that you're making a lot of assumptions about things. You could be right, but you're not using the right tone to talk about things you clearly aren't or shouldn't be certain of. I understand that when you write on The Inq - it's a tabloid business model, after all - but it's not surprising people are even less tolerant of it when posting on a forum.
 
When the specs for both cards come out, you will see. If you think about it, NV at 500mm^2 has about the biggest card you can reasonably make and sell profitably in the price bands they are aiming at.

With a shrink, they will have about 2x the transistor count, so about 2x the shaders.

ALUs should scale rather better than transistor count for two basic reasons:
  1. they're memory intensive compared to most of the rest of a GPU (exception being L2 texture cache, but that seems to scale with memory channels more than anything), and memory scales really well
  2. I think the double-precision in GT200 was a complete kludge that was tacked-on as an emergency measure. Done properly it should have relatively little overhead, meaning that a whole pile of GT200's wasted transistors are irrelevant in any extrapolation
In addition to this it appears NVidia will be moving the space-hungry DDR physical IO off die onto a hub chip. That could save a wodge of space. If the ROPs end-up on that chip (sort of like bastard offspring of Xenos's EDRAM daughter die) then that's a monster amount of space saved on die.

That then makes for a monster dense Tesla chip, with no messing with ROP functionality as it is partnered by a dedicated memory hub chip that doesn't have ROPs on it.

So instead of GT200 being ~25% area ALUs, GT300 could be 80% area ALUs (rest being texturing, control, video, IO-crud). This would be quite an eye-opener :D
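A rough back-of-envelope sketch of that extrapolation (the ~25% and 80% area fractions and the ~2x density gain are the premises above; everything else is an illustrative assumption, not a known GT200/GT300 figure):

```python
# Back-of-envelope sketch of the ALU-area argument above.
# The area fractions and density gain are the post's premises; nothing here
# is a measured GT200/GT300 number.

gt200_alu_fraction = 0.25   # ~25% of GT200 area spent on ALUs
gt300_alu_fraction = 0.80   # hypothetical: ROP/PHY moved to a hub chip, DP done "properly"
shrink_density_gain = 2.0   # a full shrink: ~2x transistors in the same ~500mm^2

# ALU budget relative to GT200's ALU area (= 1.0)
gt300_alu_budget = (gt300_alu_fraction / gt200_alu_fraction) * shrink_density_gain

print(f"GT300 ALU budget vs GT200: {gt300_alu_budget:.1f}x")
# -> 6.4x the ALU area in a similar footprint, which is why the 80%-ALU
#    scenario would be such an eye-opener if it panned out.
```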

Indeed, a bit genius.

ATI on the other hand can effectively add in 4x the transistors should they need, but 2x is more than enough to keep pace, so they will be about 250mm^2 for double performance. Power is more problematic, but if you need to throw transistors at it to control power/leakage better, ATI can do so much more readily than NV.
Ironic: if I'm right about architecture then NVidia's asymmetric dual-chip configuration is superior to ATI's symmetric one. GT300 will be a whole lot slimmer than expected while having the grunt to take on 2xATI, without any of that AFR bullshit.

Jawed
 
Is that like when NVidia relented and said "you don't need our chips to do SLI"? After years of "bundling".

Jawed

No, it is a lot more intelligent than that was. The margins were higher on the GPUs anyway, so Nvidia should have done that earlier, though there were technical issues despite some people's assertions that all was milk and honey.
 
Ironic: if I'm right about architecture then NVidia's asymmetric dual-chip configuration is superior to ATI's symmetric one. GT300 will be a whole lot slimmer than expected while having the grunt to take on 2xATI, without any of that AFR bullshit.
While I hope you're right (I'm AFR hater numero uno; I try not to mention it too much publicly nowadays and won't in the future either, because I don't want to seem unfair to AMD), I suspect you are not, at least for the GT300 generation, because NV has been insistent recently, both publicly and privately, that they would go for a big chip and then scale down rapidly.

If you could have an external interface that's more area-efficient than GDDR5 (Rambus? heh) then certainly this could make financial sense, given the wafer price difference between, say, 40nm and 90nm. However, as I said, I'm a bit skeptical this will happen in practice (especially for the multi-GPU sharing part).
 
Go back, read it again

My reading comprehension is fine thank you. I actually read everything you write several times out of necessity.

Exactly. If you don't understand this, go back to your console games and shiny things. If you don't understand why, you don't understand the semiconductor business. One is about integration, the other is about adding things to a doomed paradigm. Unless Nvidia can not only make, but bring to market, an x86 part, they are dead.

Irrelevant jabs aside (where the hell did you get the idea that I was some console game lover, and so what if I was? :LOL:), it's obvious that chipsets are going bye-bye, but that has no bearing on the potential success of GT300. We certainly aren't going to have that kind of performance integrated anytime soon. And I agree with you that Ion doesn't have a future, so I'm not sure why you're playing the integration card.

No, my technical analysis is very deep, I just don't post everything I know.

Of course it is, I'll take your word for it. That's why your articles are full of such enlightenment.

Please do write, with technical specifics, why you think GT300 will be equal to or better than the R870.

Oh Charlie, you're too cute. What German site are you referring to, anyway? Hardware.info? No, I just wait for Konkort to post the articles here.

Anyway, I don't claim to know things that are impossible to know. That's your kettle of fish. You make sweeping generalizations about things you have no knowledge of or control over. So any criticism of your unsupported rants has to be accompanied by detailed specifications of GT300 and RV870 now? Well, don't we think we're important. How about you tell us why RV870 will triumph? It should be easy given the 14 degrees you listed earlier. And your crystal ball of course.
 
What are the facts on the pricing? When asked JHH doesn't have any. Maybe he does, but he hasn't said them so far. Maybe I've missed him coming clean.

There aren't any. But I guess assuming JHH is just pulling it out of his ass is more convenient.

They report the odd headline, when it's fed to them it seems. Which journalists do you think are actively engaged on this story?

None it seems. Which is why I asked the question.

Plus it sounds like he's having a ball. So inconsequential you're sticking your oar in.

Oh definitely, nobody would spend all this time and energy unless they're revelling in it. And I stick my oar in because it's not inconsequential to my amusement. You know I love to dance around Nvidia bonfires :)
 
What takes more power: 1x512 or (2x256+Bridge)? I think that should be pretty obvious.

What I was trying to say is that 256b GDDR3 takes less power than 256b GDDR5. 512b GDDR3 takes less power than 512b GDDR5. That's it. If NV goes from 512b GDDR3 to 512b GDDR5, it will take more power.

Anyway Charlie, I hope you'll do more of a mea culpa if you're wrong here than you did with G80 vs R600. If you knew about what they're doing at the back-end, what generalization sweet spot they're aiming at, and how they'll scale the design for derivatives, this might be different. But you don't, so you really shouldn't pretend to understand the dynamics of the DX11 gen even if you did have some info.

I did screw that up, don't remember what I said after. That said, I have a fair understanding of GT300, and some DX11 info, basically a few GDC talks and chats with some of the architects.

Regarding power when adding transistors: it's a bit more complex than that in reality. Here's the truth: assuming leakage isn't too absurdly high (maybe not the best starting point for 40nm GPUs!), then more parallelism via more transistors usually means *lower power* for a *given performance target*. One word: voltage. Here's a pretty graph that brings the point home: http://dl.getdropbox.com/u/232602/Power.png - this is obviously not the main goal of adding transistors in GPUs, but in the ultra-high-end with the risk of thermal limitation this factor needs to be seriously considered along with a few other things.
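To make the voltage point concrete, here's a minimal sketch using the usual first-order dynamic power model, P ≈ C·V²·f, with leakage ignored as stipulated above. The voltages and clocks are purely illustrative assumptions, not figures for any real chip:

```python
# First-order dynamic power model: P ~ C * V^2 * f (leakage ignored, as above).
# Illustrative comparison: one cluster at high clock/voltage vs. two clusters
# at half the clock and a reduced voltage, both delivering the same throughput.

def dynamic_power(units, cap_per_unit, voltage, freq_ghz):
    """Relative dynamic power of 'units' identical clusters."""
    return units * cap_per_unit * voltage**2 * freq_ghz

C = 1.0  # arbitrary per-cluster switching capacitance

baseline = dynamic_power(units=1, cap_per_unit=C, voltage=1.10, freq_ghz=1.5)
# Twice the units at half the frequency: same work per second, but the lower
# clock lets you drop the supply voltage (the 0.90 V here is hypothetical).
wide = dynamic_power(units=2, cap_per_unit=C, voltage=0.90, freq_ghz=0.75)

print(f"baseline: {baseline:.2f}, wide-and-slow: {wide:.2f}")
# The wide design spends twice the transistors but burns roughly a third less
# dynamic power for the same nominal throughput -- the point of the graph above.
```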

As for the power, we shall see. I did say that you can burn transistors to lower power, as the chart points out, and that ATI is in a better position to do so because of relative transistor counts.

The one thing you are not taking into account is that more transistors may mean a lower power use, but if you scale the workload with the transistor count (roughly), it has the same effect as keeping the transistor count the same. If NV wants to make GT300 with double the transistors but the same performance level just to lower power from today's level, I would question their sanity.

-Charlie
 
As for the power, we shall see. I did say that you can burn transistors to lower power, as the chart points out, and that ATI is in a better position to do so because of relative transistor counts.

Relative transistor counts are one thing. But AMD doesn't hold a relative power density advantage. So they aren't in an inherently better position to just throw transistors at the power consumption problem while maintaining decent performance.

You're extrapolating GT200 and RV770 into the future and assuming similar performance characteristics on future applications (the nature of which are always changing). That ignores the evolution of both the software and hardware side of things.

The tradeoffs of programmable vs ff hardware are well documented. But what's the basis of your religious belief that further generalization is the wrong path?
 
What I was trying to say is that 256b GDDR3 takes less power than 256b GDDR5. 512b GDDR3 takes less power than 512b GDDR5. That's it. If NV goes from 512b GDDR3 to 512b GDDR5, it will take more power.
Yeah, and what I was trying to say is that R700 is already 2x256b GDDR5+Bridge, and that theoretically 1x512b GDDR5 should take the same or slightly less. And they can even get away with 1GB GDDR5 versus 2GB for R700 (and 2xRV870 I would assume), which is also on a single PCB. So I wouldn't worry too much about the memory power personally.

I did screw that up, don't remember what I said after. That said, I have a fair understanding of GT300, and some DX11 info, basically a few GDC talks and chats with some of the architects.
Well that's something at least :) Mind you, a chat with David Kirk in 2005/2006 wouldn't have told you much about G80's architecture. And it tells you nothing about what their area/transistor efficiency will be like.

As for the power, we shall see. I did say that you can burn transistors to lower power, as the chart points out, and that ATI is in a better position to do so because of relative transistor counts.
Well here you're making the big assumption that you can extrapolate GT200's area/transistor efficiency to GT300 (which doesn't seem fair). Or you're assuming higher generalization (which I have reason to believe you are exaggerating) and that this results in lower efficiency.

The one thing you are not taking into account is that more transistors may mean a lower power use, but if you scale the workload with the transistor count (roughly), it has the same effect as keeping the transistor count the same. If NV wants to make GT300 with double the transistors but the same performance level just to lower power from today's level, I would question their sanity.
Yes, but naturally that's not my point. My point is they can set the clock speed (and therefore, voltage) at whatever they need to hit a ~300W TDP. If the question is "who has the performance crown", then throwing more transistors at the problem but with a lower voltage is not a bad way to make sure you achieve that in your thermal/power limit. Of course, that is a secondary factor to overall design efficiency.
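A minimal sketch of that trade-off, assuming dynamic power scales as N·V²·f and that voltage tracks clock roughly linearly over the usable range (so P ~ k·N·f³). The cluster counts and the constant k are hypothetical, chosen only to show the shape of the curve, not to describe any real GPU:

```python
# Sketch of the "fit the design to a ~300W TDP" point above.
# Crude assumptions: P ~ k * N * f^3 for N shader clusters (voltage ~ clock),
# and throughput ~ N * f. The constants are illustrative, not real figures.

TDP_W = 300.0
k = 10.0  # arbitrary fit constant (watts per cluster at unit clock)

def max_clock(n_clusters):
    """Highest clock that keeps N clusters inside the TDP, under P = k*N*f^3."""
    return (TDP_W / (k * n_clusters)) ** (1.0 / 3.0)

for n in (10, 20):
    f = max_clock(n)
    print(f"{n} clusters: clock {f:.2f}, relative throughput {n * f:.1f}")
# Doubling the cluster count forces the clock (and voltage) down, but net
# throughput inside the same 300W envelope still rises by roughly 1.6x.
```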
 
There aren't any. But I guess assuming JHH is just pulling it out of his ass is more convenient.
If JHH had a point then he'd be served by facts. Because he doesn't have facts, he uses innuendo: he specifically leads everyone on with the "Intel's evil" line (which I don't doubt for a second, by the way; Inside Intel:

http://www.amazon.com/dp/0452276438/

is gob-smacking), but then uses his friendly fireside manner to get the listener to fill in the gaps on the awful injustice he's suffering ... as he tails off wistfully.

Jawed
 
It can't be that evil. When I was at the San Jose Marriott I took a stroll past the museum and Intel's own basketball courts. There were bright young happy people shooting hoops there; hardly hellspawn.

On JHH, I can't see Intel doing anything that NVidia hasn't done or is still doing.
 
Well, the facts are out there. Because somewhere there's an invoice with pricing for Atom bundles and another for Atom only. If OEMs were able to go out and buy unbundled Atoms at sane prices, would Nvidia have any reason to say anything at all? In what fictional scenario would they be complaining if Intel's pricing scheme wasn't anti-competitive?
 
It can't be that evil. When I was at the San Jose Marriott I took a stroll past the museum and Intel's own basketball courts. There were bright young happy people shooting hoops there; hardly hellspawn.

On JHH, I can't see Intel doing anything that NVidia hasn't done or is still doing.

It can't be that evil. When I was at the Münchener Marriott I took a stroll past the Technikmuseum and the Nazi Jugend's own basketball courts. There were bright young happy people shooting hoops there; hardly hellspawn.

That was your daily godwin, have a nice day.
 
Well, the facts are out there. Because somewhere there's an invoice with pricing for Atom bundles and another for Atom only. If OEMs were able to go out and buy unbundled Atoms at sane prices, would Nvidia have any reason to say anything at all? In what fictional scenario would they be complaining if Intel's pricing scheme wasn't anti-competitive?
In what fictional scenario would NVidia be saying that only HP has a problem with bad bumps? Oh yeah, that's right, they were lying.

The pricing could simply be: Atom = $X; Chipset = $Y; Atom+Chipset < $X + $Y.

People like to suggest Atom+Chipset neatly soldered to a board that's ready to go is < $X but where's the evidence?

And even if that's the case, if 99% of Intel's production is this pair of chips soldered to a board then it would actually cost extra to unbundle as that's a branch in Atom production. Would be nice if an insider would assess production/testing/distribution/management/inventory costs on this cheap stuff and how bundling versus unbundling works. Of course Intel is its own universe so prices don't necessarily make sense out here in the real world - particularly if Intel is being anti-competitive.
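For what it's worth, here's a tiny sketch of the distinction being drawn above: a bundle price below $X + $Y is just an ordinary discount, whereas a bundle priced below the standalone $X is the scenario NVidia is alleging. The dollar figures are entirely hypothetical, since the actual invoices are exactly what nobody has shown:

```python
# Entirely hypothetical prices -- the real invoices are exactly what nobody
# outside Intel and the OEMs has produced. This only illustrates the two cases.

atom_alone = 45.0   # $X: standalone Atom (hypothetical)
chipset    = 25.0   # $Y: Intel chipset (hypothetical)

def classify(bundle_price):
    """Label a bundle price relative to the standalone prices."""
    if bundle_price >= atom_alone + chipset:
        return "no discount at all"
    if bundle_price >= atom_alone:
        return "ordinary bundle discount (still costs more than the CPU alone)"
    return "bundle below the standalone CPU price -- the allegedly anti-competitive case"

for bundle in (70.0, 55.0, 40.0):
    print(f"bundle at ${bundle:.0f}: {classify(bundle)}")
```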

I'm not trying to deny the possibility of venal behaviour by Intel, just that NVidia's at least as capable so why take sides?

Jawed
 
No different to AMD's tears over Intel's shenanigans. Are you saying JHH doesn't have a valid point? Though I don't get the comparison. Nvidia is trying to make money. Charlie is just on some sort of personal crusade. All I'm saying is that if he really wants more people to buy into what he's saying, he needs to cut back on the irrelevant flak. Unless that isn't his goal...

It's not clear JHH has a point. JHH is attacking Intel for integrating the memory controller and graphics in the next rev of Atom, saying it's 'anti-competitive'.

The reality is that it's the INTegration of ELectronics, a trend that's been going for 30 years and has to do with semiconductor manufacturing and Moore's Law. It's simply inevitable that graphics gets sucked into the CPU/memory controller; AMD would have done it a long time ago (with K8 core), except they didn't have any GPU.

I mean, I didn't hear NV (or anyone really) complaining about AMD integrating the memory controller circa 2003. Everyone (including me) thought it was a good idea, and the market rewarded them accordingly.

I'm sure one reason Intel wanted to integrate the GPU was to make it even more difficult for 3rd parties, but it's not the sole motivator. I see it as a natural evolution of the semiconductor business that happens to disadvantage NV. And it's one that was eminently foreseeable.

The difference between bundled and unbundled atom pricing is an interesting one. I am not totally sure what to think about it...

DK
 
Relative transistor counts are one thing. But AMD doesn't hold a relative power density advantage. So they aren't in an inherently better position to just throw transistors at the power consumption problem while maintaining decent performance.

You're extrapolating GT200 and RV770 into the future and assuming similar performance characteristics on future applications (the nature of which are always changing). That ignores the evolution of both the software and hardware side of things.

The tradeoffs of programmable vs ff hardware are well documented. But what's the basis of your religious belief that further generalization is the wrong path?

If you can throw transistors at a chip to lower absolute power, that costs die area. ATI is in a much better position than NV there; they can expand the die a lot if necessary. NV can't.

The religious side is that NV has been banging the GPGPU drum for years now, and skewing their chips that way for years. When GT300 was architected, NV was under the belief that GPGPU would be the way out of the entire dying-video-card problem. They were telling any analyst who didn't run away fast enough that that would be the case.

It didn't pan out that way, but the architecture was likely already decided.

-Charlie
 