NVIDIA Fermi: Architecture discussion

Because you want the support? They tell everybody that the Quadro and Tesla business is a solutions market, and that's the reason why they don't see AMD as a competitor.
As for the margin: they spoke about 60% for Tesla.

And you think Nvidia isn't going to support someone doing HPC? Sure, that's great for their reputation: piss off someone in a small, close-knit community and see how much product you sell in the future.
 
If that is true then they aren't competing against commodity CPUs, they are competing against commodity GPUs, which I might remind you have even LOWER margins than CPUs. In the HPC space you generally aren't competing against the other guy's expensive parts, but against your own cheapest parts.

Hmmm, I'm not following. The delta between the cheapest and most expensive CPUs isn't nearly as wide as between Tesla and Geforce in price or features. For example, you can't buy a 1U rack of Geforces with 6GB of RAM each. Tesla isn't competing with Geforce, it's competing with existing CPU-based HPC solutions.

AKA, why would I pay Nvidia $2K for something they are selling for $300, when it's them that want to push it?

Simple. The $300 part doesn't offer what the $2000 part does. According to your logic there should never have been a single Quadro sold.
 
Hmmm, I'm not following. The delta between the cheapest and most expensive CPUs isn't nearly as wide as between Tesla and Geforce in price or features. For example, you can't buy a 1U rack of Geforces with 6GB of RAM each. Tesla isn't competing with Geforce, it's competing with existing CPU-based HPC solutions.

And Opteron and Xeon aren't competing with their desktop counterparts....

Any exec that believes they aren't competing is working for a company that will fail.


Simple. The $300 part doesn't offer what the $2000 part does. According to your logic there should never have been a single Quadro sold.

The $2000 part doesn't offer anything that the $300 part doesn't for the HPC market.
 
There's an actual big difference between an i7 920 and an equivalent Xeon, too.

Really? Name one difference between the i7 920 and W3520 :)

The $2000 part doesn't offer anything that the $300 part doesn't for the HPC market.

So form factor, ECC, memory capacity, support level etc. don't count as "anything"? Any serious HPC firm that buys up a bunch of XFX Geforces and expects Tesla-level support from Nvidia would be out of their mind.
 
If that is true then they aren't competing against commodity CPUs, they are competing against commodity GPUs, which I might remind you have even LOWER margins than CPUs. In the HPC space you generally aren't competing against the other guy's expensive parts, but against your own cheapest parts.

AKA, why would I pay Nvidia $2K for something they are selling for $300, when it's them that want to push it?


Hmm, how so? What are HPCs used for? Answer that question and you will see that Tesla addresses those needs but Geforce won't.
 
Really? Name one difference between the i7 920 and W3520 :)

Oops, I meant a big price difference :).

It's about ECC support[strike], and a QPI link[/strike]. ECC is an artificial differentiator; the coherent link maybe slightly less so.


-----
Duh, you've chosen that Xeon model well; I'm not sure about its memory support either. I was thinking more of the X5550!
 
So form factor, ECC, memory capacity, support level etc. don't count as "anything"? Any serious HPC firm that buys up a bunch of XFX Geforces and expects Tesla-level support from Nvidia would be out of their mind.

ECC is effectively done in software. Memory capacity is just a technicality. And support level? Ha! They'll get the same level of support regardless, or Nvidia won't be in the HPC market long. People aren't going to pay Nvidia 5x the price for a bit of extra memory when they can just design the algorithms to work within the constraints. While it's not much of an issue if you are buying 1 or 2, it's a major issue when you are buying in multiples of 1k. If you don't think $2-3 MILLION in upfront costs is a strong motivator, you don't understand HPC.
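
For anyone wondering what "done in software" can look like in practice, here is a minimal sketch of the idea: keep a parity word per chunk of device memory and re-verify it before trusting results. Purely illustrative; the chunk size, parity scheme, and kernel names are made up for this example, not anything Nvidia actually ships.

[code]
// Minimal sketch of software error detection on a non-ECC GPU.
// Each chunk of CHUNK words carries one XOR-parity word computed on write.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

#define CHUNK 256

__global__ void write_with_parity(uint32_t *data, uint32_t *parity, int nchunks) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= nchunks) return;
    uint32_t p = 0;
    for (int i = 0; i < CHUNK; ++i) {
        uint32_t v = c * CHUNK + i;          // stand-in for real payload
        data[c * CHUNK + i] = v;
        p ^= v;
    }
    parity[c] = p;
}

__global__ void check_parity(const uint32_t *data, const uint32_t *parity,
                             int nchunks, int *errors) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= nchunks) return;
    uint32_t p = 0;
    for (int i = 0; i < CHUNK; ++i)
        p ^= data[c * CHUNK + i];
    if (p != parity[c])
        atomicAdd(errors, 1);                // memory changed since the write
}

int main() {
    const int nchunks = 1 << 12;
    uint32_t *d_data, *d_parity;
    int *d_err, h_err = 0;
    cudaMalloc(&d_data, (size_t)nchunks * CHUNK * sizeof(uint32_t));
    cudaMalloc(&d_parity, nchunks * sizeof(uint32_t));
    cudaMalloc(&d_err, sizeof(int));
    cudaMemset(d_err, 0, sizeof(int));
    write_with_parity<<<(nchunks + 127) / 128, 128>>>(d_data, d_parity, nchunks);
    // ... real work on d_data would go here; re-check before trusting it ...
    check_parity<<<(nchunks + 127) / 128, 128>>>(d_data, d_parity, nchunks, d_err);
    cudaMemcpy(&h_err, d_err, sizeof(int), cudaMemcpyDeviceToHost);
    printf("corrupted chunks: %d\n", h_err);
    cudaFree(d_data); cudaFree(d_parity); cudaFree(d_err);
    return 0;
}
[/code]

XOR parity like this only detects an odd number of flips per bit position within a chunk, but it shows the trade: you spend some bandwidth and a verification pass instead of paying the ECC hardware premium.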
 
Hmm, how so? What are HPCs used for? Answer that question and you will see that Tesla addresses those needs but Geforce won't.

Crunching numbers. Point being? HPC isn't a rich market. Never has been. HPC users tend to be the most informed and hardest negotiators in the industry. The extra money doesn't provide enough benefit to justify the 5-10x cost increase, especially at the volumes most facilities purchase in.

I think people are forgetting that the types of workloads that GPUs work well on are fundamentally embarrassingly parallel, and as such, cheaper = better.
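
To make "embarrassingly parallel" concrete: the canonical example is something like SAXPY, where every output element is computed independently of every other one, so the job splits cleanly across however many cheap cards you can rack up. A toy sketch in CUDA, for illustration only:

[code]
// y[i] = a * x[i] + y[i] -- no element depends on any other, which is
// exactly the shape of workload where more cheap hardware beats fewer
// expensive parts.
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));   // zeroed inputs, just to be defined
    cudaMemset(y, 0, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
[/code]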
 
On Newegg they're $20 apart :) Desktop parts have the same QPI link too. ECC seems like the only difference.

Yep, that's pretty much what the market is willing to pay, and the DP parts provide more benefits over the normal parts than the Tesla does over the non-Tesla.

And I'm sure AMD/Intel would charge more for the DP parts if the market would let them.
 
And Opteron and Xeon aren't competing with their desktop counterparts....

Any exec that believes they aren't competing is working for a company that will fail.

They really aren't competing though. Those desktop parts only scale to (currently) 4 cores / box.

When your server or workstation needs grow beyond that, you're going to be biting the bullet on the Opteron or Xeon premium.

Any exec that trades compute density and ECC for hobbying around with desktop parts is not seeing the proper picture.
 
They really aren't competing though. Those desktop parts only scale to (currently) 4 cores / box.

When your server or workstation needs grow beyond that, you're going to be biting the bullet on the Opteron or Xeon premium.

Any exec that trades compute density and ECC for hobbying around with desktop parts is not seeing the proper picture.

Funny, the vast majority of HPC installations are done with 1P boxes, generally using desktop parts. Same actually for the majority of server farms in general. There is a reason why the premium for DP parts vs UP parts is so low.

Large numbers of companies have either specifically designed or redesigned their software in order to use large pools of 1P boxes because of the lowered costs and increased flexibility. The areas that require MP systems are shrinking every day.

So why pay a premium for a "server" GPU over the cheapest or best performance/$ part you can get? You are going to redesign your application for GPUs anyway, and your application is obviously embarrassingly parallel or you wouldn't be looking at GPUs.
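
"Redesigning around the constraints" here mostly means chunking: if the cheap card has a fraction of the Tesla's memory, you stream the dataset through it in device-sized pieces. A rough sketch of the pattern, with arbitrary sizes and a trivial kernel just to show the loop structure:

[code]
// Stream a dataset larger than the device budget through a fixed-size buffer.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(float *buf, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= k;
}

int main() {
    const size_t total = 1 << 26;            // 256 MB of floats on the host
    const size_t chunk = 1 << 22;            // cap device usage at 16 MB
    float *h = (float *)malloc(total * sizeof(float));
    for (size_t i = 0; i < total; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, chunk * sizeof(float));
    for (size_t off = 0; off < total; off += chunk) {
        size_t n = (total - off < chunk) ? total - off : chunk;
        cudaMemcpy(d, h + off, n * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<((int)n + 255) / 256, 256>>>(d, (int)n, 2.0f);
        cudaMemcpy(h + off, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    }
    printf("h[0] = %f\n", h[0]);             // 2.0 if every chunk went through
    cudaFree(d);
    free(h);
    return 0;
}
[/code]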
 
Doesn't that rather depend on what you define as HPC? 2P/8C per 1U is the sweet-spot for the HPC I see around me.
 
Funny, the vast majority of HPC installations are done with 1P boxes, generally using desktop parts. Same actually for the majority of server farms in general. There is a reason why the premium for DP parts vs UP parts is so low.

Heh, which is why the 2.66GHz Bloomfield Xeon W3520 goes for $309 on Newegg, while the equivalent DP-capable X5550 is $999. Ever wonder why the modest 2.6GHz Opteron 8435 sells for upwards of $2600?

The premium is actually huge. I'm not sure which CPUs you have been looking at.

As for server farms, particularly HPC, they are all about the lowest amount of cooling and power possible for rooms full of rackmounted equipment. 1P boxes using desktop parts? Maybe... for a handful of poor university student projects.
 
Or maybe your "knowledge" is rooted in the old school way of doing things where there is strong competition between the CPU guys. I find it hard to believe that a company could offer considerably higher efficiency hardware and not benefit from that advantage.

Of course Nvidia will offer attractive prices to get a foot in the door but after a while the hardware will speak for itself. Under which scenario do you think GPU ASPs will be constrained in the HPC market if they deliver on the promised efficiency gains over commodity CPUs?

I think what Aaron is saying is that GPUs will make money in HPCs the same way that CPUs do. If they are X times faster, they could make X times as much money. I think that is the case here, GPUs will make many times the multiple that AMD and Intel net in HPC.

-Charlie
 
People aren't going to pay Nvidia 5x the price for a bit of extra memory when they can just design the algorithms to work within the constraints. While it's not much of an issue if you are buying 1 or 2, it's a major issue when you are buying in multiples of 1k. If you don't think $2-3 MILLION in upfront costs is a strong motivator, you don't understand HPC.

Again, Tesla's competition is not Geforce. You keep making this irrelevant comparison and I don't know why. The comparison is between Tesla and current CPU-based setups. In certain workloads the perf/$ of Tesla will be much higher, and that's what Nvidia is targeting. So in fact they are banking on the cheapness of the HPC folks.
 
Well, we're at the stage now where even if I printed what I knew, it'd come across the wrong way to an excitable few, because Charlie's ruined being able to be (even cautiously) optimistic about the company.

Plus, they're just opening up with both barrels and blowing both feet off on a regular basis with what they put on Twitter and Facebook (please make it stop) and the like, Intel's Insides (that's actually vomit-worthy), and various careless whispers to careless people.

CES will be a laugh!

I don't know, I was cautiously optimistic about the A3 steppings coming out, and the release dates.

-Charlie
 
Crunching numbers. Point being? HPC isn't a rich market. Never has been. HPC users tend to be the most informed and hardest negotiators in the industry. The extra money doesn't provide enough benefit to justify the 5-10x cost increase, especially at the volumes most facilities purchase in.

I think people are forgetting that the types of workloads that GPUs work well on are fundamentally embarrassingly parallel, and as such, cheaper = better.

And don't forget, a large portion of the HPC space has an army of slaves, sometimes called grad and undergrad students, to code things that would never make sense in a commercial setting. $10K can pay for a lot of coding time in an academic environment. :)

-Charlie
 