The site uses the conventional driver names, not the version name.
But that wasn't your initial argument, was it?
No, but these things don't flip on their head overnight. We will see the start of harder times for nV in this Q1 financial report.
jimbo75 said: While it's true TSMC appear to be doing everything they can to keep nVidia in the race, not all of ATI's work will be lost. From what I see, it's just keeping nVidia in touch until the inevitable.
Even if we assume both companies are stuck with what they have, we know ATI can price Fermi into a loss. However you look at it, nVidia are making bigger chips that just aren't fast enough. If they were making smaller, slower chips that wouldn't be so bad, but they aren't: they are making much bigger chips that aren't fast enough, and that is a bad situation to be in.
Developers are customers too, Dave. I'm surprised you're taking the advances in programmability so lightly. A feature isn't defined by what the end-user sees in the end; if that were the case, not many features would have been added to GPUs since their inception. After all, we still just get an image on our monitors at the end of the day.
Yeah, I admit that I didn't express myself clearly. I knew the site uses the conventional names, so when I saw the fake picture it didn't come to my mind that they used the version number; I thought it was Cat 8.7, not version 8.7.

But that wasn't your initial argument, was it?
And to think some websites weren't exactly bowled over by the 5870's raw performance when it came out.
dizietsma said: I think it's going to be rather boring on Friday sadly...
And for good reason. At least they arrived "on time".
"On time" with a very immature Tessellation implementation. A big trade-off for "time-to-market".
Billion, perhaps?
Whoa, there! I never said Nvidia is doomed. If ATi could survive R600/3870, then Nvidia can surely weather GT200/Fermi. I'm simply of the opinion that whatever additional profits Nvidia will gain from the HPC functions of Fermi will not be enough to offset its increased cost to produce due to its size and complexity.

I don't give a f about GF100's cost to produce, and I don't care if it's bigger or smaller than Cypress. What matters is performance, features, prices for consumers, and profits for NVIDIA. This whole "oh noes, it's bigger, it must cost more to produce, thus it's doomed" situation is really lame. A knife is cheaper to produce than a gun, so does that mean guns are pointless and we should all buy knives instead? (Sorry for the comparison.)
So? Guessing anything from die sizes is absolutely pointless. And GF100 has all the HPC market to itself. How can you be sure that NV won't make more money off GF100 from gaming+HPC markets than AMD will from gaming only?
You don't know anything about that number yet. NV was counting on something when they built in all the GF100 compute capabilities. If they had thought the number was going to be fractionally small, they wouldn't have done it, would they? It's a gamble, sure, but what if it pays off?
Well... you should, if only because it is in YOUR best interest as a consumer to have parts competitive in perf/$ from both sides.

I don't give a f about GF100's cost to produce, and I don't care if it's bigger or smaller than Cypress.
Not at all. I don't think luck had anything to do with it. GT200 (which was released 2 years ago, not 3) wasn't exactly a good design, but it didn't stop NVIDIA from keeping the performance crown with it. And it wasn't a financial disaster as some like to rave about constantly, otherwise that would have shown for some time now. NVIDIA struck gold with their G80 design and especially G92, which took ATI almost 3 years to catch up with. As for Fermi, well, new architectures tend to be very hard to start (just look at what ATI had to deal with in R600), and this is just another example.
I think it's going to be rather boring on Friday sadly...
That's what I am concerned about as a consumer: NV's HPC focus.

The situation is obviously different now, since NVIDIA is late, but it's also different because NVIDIA has Tegra and a greater focus on the HPC market.
Longer term, it is actually nV that has nothing in the HPC market.

Which are also targeted for other markets, where AMD has practically nothing?
In my case, I'd say the HPC market NV is so hot on is a dead duck in the water for them if they can't arrange permission from the powers that be to touch the x86 socket.

And what's this "inevitable" you speak of?
Pffft, I can think of quite a few folks who will be partying heartily.
Why do you say immature? What indication was there back in September of it being immature? That it's different and less efficient than the competition's is a completely different thing, imo.
Uh, why?

Longer term, it is actually nV that has nothing in the HPC market.
Why should nVidia have any interest in that? Their parts are basically billed as co-processors, and ones that can easily be upgraded to boot. There's been quite a lot of interest in building clusters with nVidia hardware among the people I work with (this is for the Planck satellite, and we have a number of supercomputers dotted around Europe and the US for data crunching).

In my case, I'd say the HPC market NV is so hot on is a dead duck in the water for them if they can't arrange permission from the powers that be to touch the x86 socket.
Those aren't going to be anywhere near the performance of discrete graphics for a long time to come.

The window of opportunity for NV in the HPC market is limited to the time multi-socket Fusion chips arrive. My guess: 2012 is when they arrive, and 2013 is when Tesla's growth begins to sublimate away. NV knows this window is *very* short and has hence shot for almost all the features the HPC crowd asks for.
I think he was being sarcastic.