NVIDIA GF100 & Friends speculation

I have to agree. Many have been talking about next Monday as "the day", but I expected some leaks before that, and so far we've gotten almost nothing...

This Sunday I think:

There are some benchmarks. Or some PowerPoint slides. As TG Daily reported earlier, Nvidia flew in a gaggle of European journalists just when CES was all done and dusted, to brief them on Fermi, expected to be formally announced on Sunday week, January 17.

http://www.tgdaily.com/hardware-brief/45461-fermi-die-means-high-nvidia-silicon-costs-ahead
 
Yeahrightsureok, and it makes the GF100 somewhat over 3x faster than a GTX 295 based on those results.....:rolleyes: That silly video you're referring to uses Rys' GF100 block diagram. Don't you think that NVIDIA would have their own diagram?

Of course there is no way in hell that NVIDIA would make a video like that (NVIDIA scores in red, ATI in blue? ha ha), and I assumed all along that the score is fake as hell (wishful thinking at best), but that was all I could come up with for our new friend Richard Varga. I was hoping Mr. Varga had some answers, but so far only questions :D

I too am surprised that there are no leaks by now :(
 
Wasn't there a rumor going around that Fermi was 5870+130% (5870x2.3) in Unigine?
Perhaps this is one of those scenarios.
If seahawk is right, it would be quite bitter. Better quality fps due to the lack of AFR but perf/watt worse than 5870...ouch.
 
Agreed. Already with the HD4000 series I didn't think 8xMSAA was really relevant. Better to do something against shader and vegetation aliasing and texture flimmering than to smooth already (for the most part) adequately smooth edges even more.
 
Agreed. Already with the HD4000 series I didn't think 8xMSAA was really relevant. Better to do something against shader and vegetation aliasing and texture flimmering than to smooth already (for the most part) adequately smooth edges even more.

Now wouldn't it be nice if if there were some improvements to transparency AA? :)
 
My take: it's useless when you either have much faster 4xMSAA, or can resort to much better-looking supersampling (HD5k) or at least some hybrids (NVIDIA).

I disagree; having a healthy variety of AA modes can't hurt, rather the contrary. If you have a scene with a reasonable amount of alpha tests, for instance, a combination of 8xMSAA and transparency AA can give equivalent AA quality on transparencies to 8xRGSS, but not necessarily with the same rather big performance penalty as the latter. And yes, of course, the higher the portion of said alpha tests, the smaller the difference between transparency supersampling and full-screen SSAA.

In motion the differences between 4x and 8xMSAA are usually hard to detect, because the former already does a quite adequate job. However, when things like transparencies kick in and you have a couple of trees with countless leaves or some wires hanging in the air, the difference with any 8x-sample mode other than pure MSAA is clearly visible.

In fact, for a normal amount of transparencies in a scene, I'd rather say that hybrid MSAA/SSAA or pure SSAA is overkill. If you want to take advantage of the lower negative LOD that any supersampling portion typically provides, then of course that's a chapter of its own.

Besides, when you have resources to spare on a Radeon for something like supersampling, there's always the edge-detect custom filter option available as well.

Agreed. Already with the HD4000 series I didn't think 8xMSAA was really relevant. Better to do something against shader and vegetation aliasing and texture flimmering than to smooth already (for the most part) adequately smooth edges even more.

Shader aliasing, I'm afraid, is more of a game code issue than anything else, and texture shimmering/aliasing is more often than not a result of underfiltering, or of some developer having the funky idea that exaggerated negative LOD on some textures looks "great". When it comes to underfiltering, there's nothing you can really do other than have the GPU filter with real trilinear wherever the application requests trilinear.

By the way that "flimmering" is so damn Bockwurst :LOL:
 
Better quality fps due to the lack of AFR but perf/watt worse than 5870...ouch.

Personally, perf/watt only matters to me in mobile products, where I care about battery life. If I'm going to be plugged in and spend money on a top-of-the-line GPU, why should I care? It would be like telling a guy who buys a second ultra-performance sports car that he should put fuel efficiency first. Now, in the mid-range market it'll matter, because people won't have elaborate cooling, cases, or power supplies, but this sounds like a card aimed at the top.

Now for Tesla, it's a different story, since power density in HPC/rackmounted systems is a huge concern.
 
Of course not... he didn't say anything about a ~360W TDP, he just said it was closer to 294W than 180W.

And what NV gives as TDP is rarely reached in real life. The difference in actual consumption compared to a Hemlock card might be bigger than the difference in TDP.
 
And what NV gives as TDP is rarely reached in real life. The difference in actual consumption compared to a Hemlock card might be bigger than the difference in TDP.

I think it's typical of all GPUs. Cypress' TDP is 188W, yet average load power consumption is around 120-140W (source). And yes, under FurMark, virtually every card exceeds its TDP.
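As a rough illustration of the TDP-vs-measured gap described above (a minimal sketch using only the figures quoted in this thread, not official measurements):

```python
# Rough arithmetic for the gap between rated TDP and typical gaming load.
# Figures are the ones quoted in the post above, not official measurements.
cypress_tdp = 188.0       # W, Cypress (HD 5870) rated TDP as cited above
cypress_avg_load = 130.0  # W, midpoint of the 120-140W average cited above

headroom = cypress_tdp - cypress_avg_load
ratio = cypress_avg_load / cypress_tdp
print(f"Average load is about {ratio:.0%} of TDP ({headroom:.0f}W of headroom)")
```

So if GF100's gaming load sits similarly far below its rated TDP, comparing rated TDPs alone could indeed over- or understate the real consumption gap to a Hemlock card.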
 