NVIDIA Fermi: Architecture discussion

Payment dates for pre-orders: the S2050 is May 31st and the S2070 is July 31st. It will not be available before January 31st, according to the same page. And here I was thinking that they'd launch Tesla before GeForce. (grrr, sources)
 
Why would you think that when Nvidia themselves said otherwise? It's not new, Alexko.

http://www.nvidia.com/object/io_1258360868914.html

The Tesla C2050 and C2070 products will retail for $2,499 and $3,999 and the Tesla S2050 and S2070 will retail for $12,995 and $18,995. Products will be available in Q2 2010.

Editors’ note: As previously announced, the first Fermi-based consumer (GeForce®) products are expected to be available first quarter 2010.
 
Yeah, NVIDIA just spent resources on something irrelevant... Did you by any chance follow GTC at all? I suggest you listen to the Tech Report podcast from that time. Scott describes what can only be seen as excitement and interest in what Fermi is (and its potential) and in the tools developed for its use, which, btw, were mostly requests from people working in that market.

Don't be so defensive. You claimed the Visual Studio plugin is a deciding factor when choosing a GPU for a massively parallel machine running Linux instances, and I'm telling you it is not.

As others have said, the general tools and libraries are a whole different matter, but again, not the plugin for a Windows-only IDE.

I did not say NVIDIA's dev platform is irrelevant in general.
 
Can we start a new thread called 'petty arguments with Charlie', so that whenever they start they can be spun off over there?

It will be a win-win, since that way Charlie will have a thread all about him, and all the other threads can be kept free of pointless posts.
I might just do that.
 
I never said it was a deciding factor, but it's certainly part of the decision, since, as mentioned, these tools were developed based on input from people working in the area.
I also said that what NVIDIA has invested in CUDA and applications for the GPGPU/HPC market is far beyond what AMD has. Thinking otherwise is very naive, to say the least, especially when trying to pitch AMD as a viable option.
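For what it's worth, the everyday CUDA workflow on Linux never touches the Visual Studio plugin. A minimal sketch (the file name, sizes and launch configuration are just placeholders) that builds with nothing more than the toolkit's nvcc and runs from the shell:

Code:
// vec_add.cu - placeholder example; build on Linux with: nvcc vec_add.cu -o vec_add
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void add(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                  // 1M elements, arbitrary size
    const size_t bytes = n * sizeof(float);

    float* ha = new float[n];
    float* hb = new float[n];
    float* hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // 256 threads per block
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);   // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}

The plugin only comes into play if you want IDE-integrated debugging and profiling on Windows; it's orthogonal to whether the CUDA toolchain itself is usable on a Linux cluster.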
 
Play Crysis?

Heh, I'm sure that some techie working there may have that idea :)

chavvdarrr said:
Do we have any idea what this supercomputer would do?

Yes:

http://news.cnet.com/8301-13924_3-10364534-64.html?tag=mncol;txt

Oak Ridge's supercomputer will be used for research in energy and climate change and is expected to be 10 times more powerful than today's fastest supercomputer, according to a joint statement from Oak Ridge and Nvidia. The architecture would use both graphics processing units (GPUs) from Nvidia and central processing units (CPUs), according to Nvidia. Intel and Advanced Micro Devices, among others, make the CPUs.
 
The 225W is probably an upper bound for the 6GB version; 24 GDDR5 modules probably consume a considerable amount of power themselves. It's very interesting that the first Tesla boards will have cores disabled, though. Given that they're low-volume parts, that's a bit worrisome :)
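Rough back-of-envelope, with assumed (not official) per-device figures: if each GDDR5 device draws somewhere around 1.5-2.5 W, then 24 of them add up to roughly 35-60 W, which would leave on the order of 165-190 W of a 225 W board budget for the GPU itself, the VRMs and the rest of the board.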
 
That's worrying (and a tad ridiculous, actually), because JHH has already officially boasted about some features, and those features won't be in the actual card...
Poor yield issues not solved even in the A3 revision? :oops:
 
The document is dated 16 November, so it concerns only A2.
 
Call me suspicious, but if you're targeting a "full" chip, then your documentation (of any date) isn't going to flaunt "castrated" numbers. The target audience of this documentation isn't going to care about which spin of the metal you're currently using.
 
This is probably more for the 'signs of strain' topic, but do you reckon that the margins in the consumer high end really matter that much to Nvidia? It's a small-volume sector, and Nvidia makes no secret of the fact that most of their profit comes from the professional sector, where margins are still pretty strong.

It all depends on the accounting. Here's a mental game for you: how many "professional" cards do they sell with the high-end GPU? How many total cards do they sell with the high-end GPU?

The answer is that without the non-professional market to pay for the R&D, they would make at best a fraction of the profits on the high end. It's like this all over the place in computing: there are gravy markets with very high profits, but in order to make those profits you generally need some other market, with much lower profits, paying for the bulk of the R&D.
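A purely hypothetical illustration of that arithmetic: say the chip costs $400M to develop, the professional line sells 100,000 boards at a $1,500 gross margin each ($150M), and the consumer line sells 5 million boards at a $60 gross margin each ($300M). On its own, the professional segment doesn't even cover half the R&D; with the consumer volume absorbing most of the fixed cost, those fat professional margins can actually show up as profit. The numbers are invented, but that's the general shape of it.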
 