NVIDIA GF100 & Friends speculation

No but these things don't flip on their head overnight. We will see the start of harder times for nV in this Q1 financial report.

We heard that before, in the following quarters of the RV770 launch. Nothing really materialized...

The situation is obviously different now, since NVIDIA is late, but it's also different because NVIDIA has Tegra and a greater focus on the HPC market.

jimbo75 said:
While it's true TSMC appear to be doing everything they can to keep nVidia in the race, not all of ATI's work will be lost. From what I see, it's just keeping nVidia in touch until the inevitable.

Even if we assume both companies are stuck with what they have, we know ATI can price Fermi into making a loss. No matter how you look at it, nVidia are making bigger chips that just aren't fast enough. If they were making smaller, slower chips that wouldn't be so bad, but they aren't; they are making much bigger chips that aren't fast enough, and that is a bad situation to be in.

Which are also targeted for other markets, where AMD has practically nothing ?

And what's this "inevitable" you speak of ?

Also, do you see one game and declare it a failure ? You know, the HD 2900 XT also beat the 8800 GTX in Call of Juarez at first, and that didn't make it any better than it really was. Wait for the review date and then declare it for what it is.
 
Developers are customers too, Dave. I'm surprised you're taking the advances in programmability so lightly. A feature isn't defined by what the end-user sees in the end; if that were the case, not many features would have been added to GPUs since their inception. After all, we still just get an image on our monitors at the end of the day.

Good point; just remember that if a feature is too expensive, it will go. Everything comes down to cost.
If Fermi is not going to make money, then a new chip will be around the corner in no time, and it could have different or even fewer features, because NV will try to make it profitable.
 
And to think some websites were not very bowled over by the 5870 raw performance when that came out.

I think it's going to be rather boring on Friday sadly... :(
 
But that wasn't your initial argument, was it?
Yeah, I admit that I didn't express myself clearly. I knew the site uses the conventional names, so when I saw the fake picture it didn't occur to me that they had used the version number; I thought it was Cat 8.7, not version 8.7.

Of course that is some ignorance on my part, not knowing to check for the version number vs the conventional naming. I'll remember to check for that next time. :oops:
 
And to think some websites were not very bowled over by the 5870 raw performance when that came out.

And for good reason. At least they arrived "on time".

dizietsma said:
I think it's going to be rather boring on Friday sadly... :(

If the performance figures we've seen, mostly in synthetic benchmarks, are a good indication, I'm afraid it will be. The performance delta over GT200 does seem to be in line with Cypress vs RV770, which is disappointing. At least I hope to see some higher minimum framerates.
 
"On time" with a very immature Tessellation implementation. A big trade-off for "time-to-market".

Well, immature or not, it was the first one available (I don't count the tessellation unit that has been around since R600, since I don't consider it of any use at all), and that much deserves credit.
 
I don't give a f about GF100's cost to produce; I don't care if it's bigger or smaller than Cypress. What matters is performance, features, prices for consumers and profits for NVIDIA. This whole "oh noes, it's bigger, so it must cost more to produce, thus it's dooomed" argument is really lame. A knife is cheaper to produce than a gun, so does that mean guns are pointless and we should all buy knives instead? (Sorry for the comparison.)
Whoa, there! I never said Nvidia is doomed. If ATi could survive R600/3870, then Nvidia can surely weather GT200/Fermi. I'm simply of the opinion that whatever additional profits Nvidia will gain from the HPC functions of Fermi will not be enough to offset its increased cost to produce due to its size and complexity.
 
"On time" with a very immature Tessellation implementation. A big trade-off for "time-to-market".

Why do you say immature? What indication was there back in September of it being immature? That it's different and less efficient than the competition's is a completely different thing, imo.

And for good reason. At least they arrived "on time".

I think he was being sarcastic.
 
So? Guessing anything from die sizes is absolutely pointless. And GF100 has all the HPC market to itself. How can you be sure that NV won't make more money off GF100 from gaming+HPC markets than AMD will from gaming only?


You don't know anything about that number yet. NV was counting on something with all the GF100 compute capabilities. If they'd thought that "number is going to be fractionally small", they wouldn't have done it, would they? It's a gamble, sure, but what if it pays off?
I don't give a f about GF100 cost to produce, I don't care if it's bigger or smaller than Cypress.
Well... you should, if only because it is in YOUR best interest as a consumer to have parts competitive in perf/$ from both sides.
 
Not at all. I don't think luck had anything to do with it. GT200 (which was released 2 years ago, not 3) wasn't exactly a good design, but it didn't stop NVIDIA from keeping the performance crown with it. And it wasn't the financial disaster some like to rave about constantly; otherwise that would've been visible for some time now. NVIDIA struck gold with their G80 design, and especially G92, which took ATI almost 3 years to catch up to. As for Fermi, well, new architectures tend to be very hard to start (just look at what ATI had to deal with on R600), and this is just another example.

nVidia could command silly prices for the 8800gtx/ultra while the 2900 floundered.

The 3870 changed that almost immediately. It was smaller and could be X2'd to give ATI the performance crown back, albeit for a couple of months only.

So next up, RV770 and GT200 were as close as they could be. The 9800 GX2 firmly put the 3870 X2 in its place, so why didn't the GTX 295 do that to the 4870 X2?

And now with Fermi, we have a situation where nVidia has given up any hope of getting the halo part back. But that's ok, because nVidia has the fastest single gpu? Even though it's 50% bigger, hotter, hungrier?

How does a company go from being almost twice as fast as its competitor (8800 Ultra vs 2900) to this farcical mess with Fermi in 3 years?

It's not a change of strategy that is needed, it's sweeping changes from top to bottom of the company. And yes, nVidia are still making money, barely, but that's mostly due to ATI not competing in nVidia's really profitable businesses - yet.
 
The situation is obviously different now, since NVIDIA is late, but it's also different because NVIDIA has Tegra and a greater focus on the HPC market.
That's what I am concerned about as a consumer. NV's hpc focus.

Which are also targeted for other markets, where AMD has practically nothing ?
Longer term, it is actually nv that has nothing in HPC market.

And what's this "inevitable" you speak of ?
In my case, I'd say the HPC market NV is so hot on is dead in the water for them if they can't arrange permission from the powers that be to touch the x86 socket.

The window of opportunity for NV in the HPC market lasts only until multi-socket Fusion chips arrive. My guess is they arrive in 2012, and 2013 is when Tesla's growth begins to sublimate away. NV knows this window is *very* short and has hence gone for almost all the features the HPC crowd asks for.
 
Pffft, I can think of quite a few folks who will be partying heartily :D
 
Why do you say immature? What indication was there back in September of it being immature? That it's different and less efficient than the competition's is a completely different thing, imo.

There was no indication, because there were no games and no other implementation from another company. But after Stalker, Metro 2033 and the previews of the GF100 architecture, I think "immature" is a good description of the implementation.
 
Longer term, it is actually nv that has nothing in HPC market.
Uh, why?

In my case, I'd say the HPC market NV is so hot on is a dead duck in the water for them if they can't arrange for permission from the powers that be to touch the x86 socket.
Why should nVidia have any interest in that? Their parts are basically billed as co-processors, and ones that can easily be upgraded to boot. There's been quite a lot of interest in building clusters with nVidia hardware among people I work with (this is for the Planck satellite, and we have a number of supercomputers dotted around Europe and the US for data crunching).

The window of opportunity for NV in hpc market is limited to the time multi-socket fusion chips arrive. My guess 2012 is when they arrive, 2013 is when tesla's growth begins to sublimate away. NV knows this is *very* short and is hence shot for almost all the features HPC crowd asks for.
Those aren't going to be anywhere near the performance of discrete graphics for a long time to come.
 