NVIDIA Maxwell Speculation Thread

no-X said:
I agree that people are bashing Charlie unjustly.
His main problem (other than tinted glasses) is a stunning technical incompetence and an unshaken trust in sources that are sometimes blatantly misguided. It's a dangerous combination.

He truly believes he knows what he's talking about, yet the way he writes about schedules, verification processes, fabrics and (OMG) yields is so desperately naive and simplistic that it's obvious he's clueless. I once tried around here to correct some of his ideas about verification, but there was just no point. Anyway, his audience seems to eat it up, so why stop, right?
 
As far as I know, the "2014" roadmap has been known since June/July 2011 (here are all 3 versions I found - from above: 2010, June/July 2011 and the last one: May 2011). Anyway, I agree that people are bashing Charlie unjustly. There were many websites spreading fakes and bullshit before GCN's release, but nobody criticises them.

That's probably because of two things:
a) those sites have far fewer klingons attached, so it's less appalling;
b) they don't attempt to masquerade as something they're not, they just try to get by.
Take those two away and nobody would have much of a problem. Sadly, it doesn't seem likely.
 
silent_guy: That's right, but I still believe that spreading obvious nonsense like the XDR fakes, renaming fakes and other similar hoaxes did more damage. Yet the sites which spread this BS publicly aren't criticised at all. It seems to me that presenting an incorrect picture using nice words and good form is more tolerable (socially) than presenting a truer version in a less acceptable form.

Charlie's technological knowledge is limited, but he tries to understand why something is possible or impossible (he's sometimes right, other times wrong). The fake-spreaders, on the other hand, didn't even care whether any of it was technologically possible - but nobody cares, and their reputation stays untouched...
 
It seems to me that presenting an incorrect picture using nice words and good form is more tolerable (socially) than presenting a truer version in a less acceptable form.

Well that's true in general, not just on the internet. The other reason is that Charlie's campaign against nVidia has been running for several years now; it's one continuous stream of hatred and bitterness. Those other random sites posting garbage from one release to the next are quickly forgotten. Many don't have a reputation to begin with.
 
Back to the topic at hand: why do you think Nvidia revealed the existence of Maxwell at GTC in the first place?

There's no upside in showing a schedule: at that time, the spec was probably still a loose collection of slogans about what they want to fix/improve, so it's impossible to be exact anyway. (Never mind the fact that silicon schedules are by default a year late, even if you've already taken that lateness into account. ;))

The only reason I can come up with is that it was a way to reassure HPC people that it's worth investing in development for GPU computing because there's a multi-year roadmap. That's it.

I have to laugh whenever somebody feels the need to use the word 'lies' and whatnot when talking about schedules. Just stupid. AMD told us in July or August that Tahiti would be available by the end of the year. I'm sure they were shooting for availability before Xmas. It didn't happen. Is that a lie? If schedules just 5 months out are so difficult to keep, with silicon in hand, how accurate do you expect them to be 3 years out?
 
Back to the topic at hand: why do you think Nvidia revealed the existence of Maxwell at GTC in the first place?

There's no upside in showing a schedule: at that time, the spec was probably still a loose collection of slogans about what they want to fix/improve, so it's impossible to be exact anyway. (Never mind the fact that silicon schedules are by default a year late, even if you've already taken that lateness into account. ;))

The only reason I can come up with is that it was a way to reassure HPC people that it's worth investing in development for GPU computing because there's a multi-year roadmap. That's it.

I have to laugh whenever somebody feels the need to use the word 'lies' and whatnot when talking about schedules. Just stupid. AMD told us in July or August that Tahiti would be available by the end of the year. I'm sure they were shooting for availability before Xmas. It didn't happen. Is that a lie? If schedules just 5 months out are so difficult to keep, with silicon in hand, how accurate do you expect them to be 3 years out?

Well that slide was released in 2010, after Fermi was launched. It also stated that Tesla had been released in 2007 and Fermi in 2009. Pretty sure those are clear lies. :LOL:

As for the reasons you cite for mentioning Maxwell, I think they're spot-on. Events like GTC are meant (among other things) to attract developers and investors, to convince them that NVIDIA's strategy is sound and that you should do things their way, using their products. This roadmap shows commitment, even if it's just a slide and a pretty speech, so that helps. Plus, it doesn't really cost NVIDIA anything. I mean it was already pretty obvious that they had new products in development, and revealing their codenames doesn't really do any damage.
 
If schedules just 5 months out are so difficult to keep, with silicon in hand, how accurate do you expect them to be 3 years out?
Considering nV doesn't even seem to know when their own past chips rolled out, one shouldn't expect much from it :LOL:
 
Maybe they were just "schedule-aligned", i.e. scheduled dates without the delays that inevitably occurred.
 
Well that slide was released in 2010, after Fermi was launched. It also stated that Tesla had been released in 2007 and Fermi in 2009. Pretty sure those are clear lies. :LOL:

And why does anyone care? If anything, marking past launches as later than they actually were makes the future releases look better, since the same projected launch dates then appear closer together.
 
Nice find, can't view the website though - no Silverlight support on my phone :( Any indication of which features will be introduced with Maxwell - i.e. not with Kepler?
 
Hmm... on slide 24...
Echelon on 10nm projected as a 290mm² chip... Pretty small by nVIDIA standards... I wonder when the "smaller chip" route will start... Any chance Kepler might be smaller than we think? :D
 
Hmm... on slide 24...
Echelon on 10nm projected as a 290mm² chip... Pretty small by nVIDIA standards... I wonder when the "smaller chip" route will start... Any chance Kepler might be smaller than we think? :D

Echelon may be a HPC-only chip and chips aimed at the gaming market may have a different architecture.
BTW, they cut their performance target from 20 TF DP (back in Nov 2010) to 16 TF per chip.
 
Really? Ok, looks like you'll get your GCN clone....eventually :p
The SM lanes still work entirely differently. And I guess I now understand the motivation for the register file caches a bit better, even if I'm still not completely convinced by the layout they've presented so far.
 
Hmm... on slide 24...
Echelon on 10nm projected as a 290mm² chip... Pretty small by nVIDIA standards... I wonder when the "smaller chip" route will start... Any chance Kepler might be smaller than we think? :D

Echelon is just a concept, not an actual planned ASIC.
 