NVIDIA Fermi: Architecture discussion

Can anybody give a quarter-sensible reason for how putting a DVI port on a Tesla will help reduce costs for supercomputers when these babies cost ~$3K a pop? And it's not like they undercut Quadros on price either.


Nvidia's taking a frown and turning it into a smile! They've got so little to talk about that they are pimping "Free! DVI! Port!". Who wants one of those newfangled DisplayPort things? Not us, and AMD is simply going against Science and Nature by including it on their consumer cards.

Next on the list of special Nvidia features will be:

Free! GPU! Cooler!
Free! Endplate! Bracket!
Free! PCIe! Connector!

Seriously, I think Nvidia have some kind of BS budget that they have to use up every month or they don't get their bonuses.
 
Can anybody give a quarter-sensible reason for how putting a DVI port on a Tesla will help reduce costs for supercomputers when these babies cost ~$3K a pop? And it's not like they undercut Quadros on price either.

Well, mainly so you don't need some other output device in a Tesla-equipped workstation/server, I guess. Saves an expansion slot, some power, and some money.
 
I looked at that article, and I'm no Charlie fan, but did he actually say that? I didn't see it. It'd be pretty absurd.

[conspiracy hat on] One scenario could be that someone will lose less money by paying TSMC to spike yields on the whole process than they would by having their competitor eat them alive in the market. [conspiracy hat off] I am not saying this is happening, nor am I saying it is only affecting ATI; I am just saying that something is really, really wrong.

Glenn Beck, is that you?
 
First of all, that particular SKU is a workstation card, not a server part. Most Teslas are not video display devices, so you can't run any output through them (even 2D during boot up).

So for your nice fancy workstation with Teslas in it, you need some sort of graphics card on top of that for display. If you want good performance, the cards need to be identical, so that means you're stuck buying a Quadro... which makes no sense since it's 2x more expensive.

So the whole reason is that it makes life easier for workstation users, and frankly a DVI port is cheap anyway compared to the price tag.

David
 
Can anybody give a quarter-sensible reason for how putting a DVI port on a Tesla will help reduce costs for supercomputers when these babies cost ~$3K a pop? And it's not like they undercut Quadros on price either.

So one needn't buy a separate display adapter just to get a display output?
 
First of all, that particular SKU is a workstation card, not a server part. Most Teslas are not video display devices, so you can't run any output through them (even 2D during boot up).

David

Indeed, take this picture of a packed-up server:
http://www.heise.de/imgs/09/4/4/2/1/5/7/Tesla_S2070__Top-169e817ea9223a16.jpg

Unfortunately, this specific system seems to require both 6-pin and 8-pin power, but you can see the two DVI ports and two HDMI ports still unused on the PCB. The fan connector is also still on the PCB.
As for the SLI connector, you can actually see the traces of a second SLI connector still present on the board.
 
While that is true, there are still people who say it isn't working, doesn't exist, and cannot run code yet. Thus it addresses part a) as mentioned in your post. An elephant is big, but it cannot run DX11 benchmarks. :)

Well, there are those people, yes. But how will this picture convince them? I suggest nVidia would have to put cards on shelves to change their minds, so again, what was the point of this picture, and of nVidia actively seeding it to chrisray and other community leaders/websites?

Maybe there's a hidden message and you need to get your Green Lantern decoder ring or something (see what I did there?) :p
 
Well, there are those people, yes. But how will this picture convince them?

Maybe because those people are generally weak-minded to begin with? You should see some of the responses out there - people are actually hoping that it's fake :LOL: Disdain for Nvidia is really becoming a phenomenon.
 
Maybe because those people are generally weak-minded to begin with? You should see some of the responses out there - people are actually hoping that it's fake :LOL: Disdain for Nvidia is really becoming a phenomenon.

That's actually my point: if you have a population who aren't conspiracy theorists, this picture has no purpose. If you have another population that, no matter what, won't believe you, then this picture has no purpose either. :shrug:
 
What's wrong with that, given the target market? There are people using Tesla's weak DP throughput today who would be interested in the comparison.

Are there? Going by David's numbers, Tesla is roughly on par with Nehalem in DP GFLOPS per mm² or per watt. Given that, it hardly makes any sense to go through the hassle of using Tesla, CUDA, etc., instead of a CPU like Nehalem or Istanbul if you need DP.
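
For what it's worth, here's the back-of-envelope math behind that claim as a tiny C snippet. The peak-DP, power, and die-size figures are my own assumptions pulled from public spec sheets (a GT200-based Tesla C1060 and a quad-core Nehalem Xeon X5570), not David's actual numbers:

[code]
#include <stdio.h>

int main(void)
{
    /* Illustrative spec-sheet figures (assumed, not from the thread):
       Tesla C1060: 30 DP units x 1.296 GHz x 2 FLOPs (MAD) ~ 78 DP GFLOPS,
                    ~188 W board power, ~576 mm^2 GT200 die.
       Xeon X5570:  4 cores x 2.93 GHz x 4 DP FLOPs/clock   ~ 47 DP GFLOPS,
                    95 W TDP, ~263 mm^2 die. */
    double tesla_gflops = 30 * 1.296 * 2, tesla_w = 188.0, tesla_mm2 = 576.0;
    double xeon_gflops  = 4 * 2.93 * 4,   xeon_w  = 95.0,  xeon_mm2  = 263.0;

    printf("Tesla C1060: %.2f GFLOPS/W, %.3f GFLOPS/mm^2\n",
           tesla_gflops / tesla_w, tesla_gflops / tesla_mm2);
    printf("Xeon X5570:  %.2f GFLOPS/W, %.3f GFLOPS/mm^2\n",
           xeon_gflops / xeon_w, xeon_gflops / xeon_mm2);
    return 0;
}
[/code]

With those assumed figures the two land within roughly 20-30% of each other on both metrics, which is all "roughly on par" is meant to say here.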
 
Well, there are those people, yes. But how will this picture convince them? I suggest nVidia would have to put cards on shelves to change their minds, so again, what was the point of this picture, and of nVidia actively seeding it to chrisray and other community leaders/websites?

Maybe there's a hidden message and you need to get your Green Lantern decoder ring or something (see what I did there?) :p

Nvidia knew the card was going to be seen soon anyway, so they may as well have a little fun with it. Give users something to talk about. Spread a little hype. Get people talking about it.

Regardless of the results, people are definitely talking about it.
 
Nvidia knew the card was going to be seen soon anyway, so they may as well have a little fun with it. Give users something to talk about. Spread a little hype. Get people talking about it.

Regardless of the results, people are definitely talking about it.

So was that the point of the mockup? :rolleyes:
 
Nope. I don't honestly think Nvidia cares about appeasing the conspiracy theorists.

It's just a coincidence that it's also the day 5890 reviews came out... :)

Oh, I'm under no illusion that this is a coincidence. Like I said, Nvidia have succeeded in the thing they wanted most: people talking about it.
 
Oh, I'm under no illusion that this is a coincidence. Like I said, Nvidia have succeeded in the thing they wanted most: people talking about it.

But that's only a good thing if they are talking about a great new product they can't wait to get hold of. If people are talking about how it's another lame fake, put out to spin to people in the face of an actual product launch because that's all Nvidia have against AMD... that's probably not so good.

You yourself said you're not allowed to talk about it, and now you're posting photos? From the company that "doesn't comment about unreleased products"? Smacks of desperation and the six months prior to the NV30 launch.
 
I'm allowed to talk about anything that's public and not covered by NDA. If I had not posted it, someone else would have. I also have no issue with posting something new that I am excited about.

If that's not enough for you, then I don't really care. :p

They sure like creating the conspiracies and drama though!

Nvidia doesn't have to do anything to create conspiracies and drama about themselves.
 
It's pretty important for FLOPS-intensive work (e.g. matrix multiply), and NVidia is clearly trying to focus on that sort of thing with Fermi. Things like GPU folding have only been faster on NVidia hardware because ATI had some limited features (e.g. write-private, read-public shared memory) and bad language support. Now they have OpenCL and better hardware, and as we saw in that "paper dragon" PDF by ATI, RV870 outdoes Fermi in some GPGPU features/specs.

I do think the top-end Fermi will be faster in games simply because it will have 128 tex units, 50% more BW, and may have faster triangle setup if they find a way to parallelize that in CUDA code. But at any given BOM, RV870 will trounce Fermi in gaming and probably GPGPU, too.

Of course, market forces are going to make you pay for performance either way, so just as NVidia was selling 448-bit boards with >400 mm² chips for under $150 while ATI used a 128-bit 5770 at the same price point, so too will Fermi be at least somewhat compelling for the end user.
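
Since "FLOPS-intensive work (e.g. matrix multiply)" keeps coming up, here is a minimal kernel-only sketch of the kind of workload in question: a tiled matrix multiply in CUDA that leans on the read/write on-chip shared memory the older ATI parts lacked. This is illustrative toy code of my own, not anything from NVidia or ATI; it assumes square matrices whose dimension is a multiple of the tile size and skips all error handling:

[code]
#define TILE 16

// Tiled matrix multiply C = A * B for n x n row-major matrices.
// Each thread block stages TILE x TILE tiles of A and B in on-chip shared
// memory (readable and writable by every thread in the block), then
// accumulates its partial dot products out of that fast memory.
__global__ void matmul(const float *A, const float *B, float *C, int n)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        // Cooperative load: each thread writes one element of each tile.
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}

// Launched as something like:
//   matmul<<<dim3(n / TILE, n / TILE), dim3(TILE, TILE)>>>(dA, dB, dC, n);
[/code]

Each tile is read from DRAM once and then reused TILE times out of shared memory, which is why raw ALU throughput, bandwidth, and decent on-chip storage all matter for this kind of code.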

Let me get this straight: ATI feature-wise had better "paper" specs than Nvidia in the RV770 vs G200 round, yet Nvidia had better real-world performance. ATI has added a little, doubled everything else, and NOW they will be faster in gaming overall vs Fermi? Interesting, considering Fermi is pretty much double, and a completely new design, vs G200.

And I really wish I knew why people felt OpenCL was/is going to save ATI in GPGPU computing; at best I see a 30% improvement for them in GPGPU tasks. Hey, at least ATI can always say: look, we own synthetic GPGPU benchmarks, pay no attention to those real-world numbers over there, just look here.
 