GF100 evaluation thread

Whaddya think?

  • Yay! for both

    Votes: 13 6.5%
  • 480 roxxx, 470 is ok-ok

    Votes: 10 5.0%
  • Meh for both

    Votes: 98 49.2%
  • 480's ok, 470 suxx

    Votes: 20 10.1%
  • WTF for both

    Votes: 58 29.1%

  • Total voters
    199
  • Poll closed.
I wonder if tessellation, especially the adaptive stuff, would lead to an increase in necessary inter-GPU communication?
In AFR mode each frame is separate, so there should be no increase in traffic at all. DX11 tessellation does not reuse any data or store any additional data to GPU memory. Everything is generated on the fly (including tess factors for all edges and for the inner triangle).

The DX11 tessellator is designed so that it doesn't need to waste GPU memory bandwidth storing the tessellated vertices to memory first and reading them back later (like some earlier designs did). It's basically a layer between the vertex shader and the pixel shader. Vertices are the most coarse data, tessellated vertices are more detailed and pixels are the most detailed. The tessellator actually saves memory bandwidth, as the vertex data can be more coarse (fewer vertex buffer reads), and the whole tessellation step reads from the post-transform vertex cache and outputs to a similar cache.

The tessellation shader needs to implement 3 methods that the tessellator calls (a control point method, a patch method and a vertex calculation method). This design clearly indicates that the whole patch (usually a single untessellated triangle) is not tessellated completely first and then read; instead the system calls the methods when it needs more data to triangulate. The tessellated mesh should always be more efficient to render compared to an equivalent static mesh. So it's pointless to store the last frame's data, as it should be more efficient to generate the new data (rather than read and write a huge amount of tessellated data from/to GPU memory).
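To make that three-method design concrete, here is a loose CPU-side sketch in C++; the type and function names are invented stand-ins for the real HLSL hull/domain shader stages, not an actual D3D11 interface.

```cpp
#include <array>
#include <cstdio>

// Invented stand-in types; the real pipeline uses HLSL hull/domain shaders.
struct ControlPoint   { float pos[3]; };
struct PatchConstants { float edgeTess[3]; float insideTess; };

using Patch = std::array<ControlPoint, 3>;

// 1) "Control point method": runs once per output control point.
ControlPoint ControlPointFunc(const Patch& patch, int i) {
    return patch[i];  // simple pass-through for a triangle patch
}

// 2) "Patch method": produces tess factors for the 3 edges and the interior.
PatchConstants PatchFunc(const Patch&) {
    return { {4.0f, 4.0f, 4.0f}, 4.0f };  // uniform factors for simplicity
}

// 3) "Vertex calculation method": evaluates a vertex at a barycentric
// domain location (u, v, w). Called only when the fixed-function
// tessellator needs that vertex - nothing is round-tripped through memory.
ControlPoint VertexFunc(const Patch& cp, float u, float v, float w) {
    ControlPoint out{};
    for (int k = 0; k < 3; ++k)
        out.pos[k] = u * cp[0].pos[k] + v * cp[1].pos[k] + w * cp[2].pos[k];
    return out;
}

int main() {
    Patch tri = {{ {{0, 0, 0}}, {{1, 0, 0}}, {{0, 1, 0}} }};
    Patch cage;
    for (int i = 0; i < 3; ++i) cage[i] = ControlPointFunc(tri, i);
    PatchConstants pc = PatchFunc(cage);
    // The tessellator now generates domain points according to pc and pulls
    // each vertex through VertexFunc on the fly:
    ControlPoint c = VertexFunc(cage, 1 / 3.f, 1 / 3.f, 1 / 3.f);
    std::printf("inside factor %.1f, centroid x = %.2f\n",
                pc.insideTess, c.pos[0]);
}
```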
 
Indeed. Is this where the 1700MHz figure came from as well?
Not directly, but it was from NVIDIA.

BZB, in the past it's never been like that. We dig for info, verify it using multiple sources, and then sit on it until it becomes valuable, either as something to keep discussion going in the forums, pre-launch, or in the article so there's something to read that you won't get elsewhere.

Usually that information gathering step doesn't include multiple rounds of misinformation. Oh well, lessons learned for next time, and we'll still have a nice round of stuff you haven't seen or learned yet (hopefully).
 
Not directly, but it was from NVIDIA.

BZB, in the past it's never been like that. We dig for info, verify it using multiple sources, and then sit on it until it becomes valuable, either as something to keep discussion going in the forums, pre-launch, or in the article so there's something to read that you won't get elsewhere.

Usually that information gathering step doesn't include multiple rounds of misinformation. Oh well, lessons learned for next time, and we'll still have a nice round of stuff you haven't seen or learned yet (hopefully).

My bold. I think things have changed very much in that regard in the last couple of years. AMD have become very good at holding their cards close to their chests. Nvidia have done a lot of misinformation and abused their relationship with the press. I think they have certainly co-opted people (like engineering staff) who may not previously have been involved in using the press for misinformation.

I think it was worse for Nvidia this time around because of the dragged-out development of Fermi, and the way that they kept changing the product because they couldn't make what they originally wanted. Yet at the same time they kept putting out statements and leaking things via the press as a backdoor marketing channel, as a spoiler against the ATI DX11 cards.

It's just part of Nvidia's burning bridges strategy to screw over everyone as and when they need to. I don't think it's done them any favours in the long-term, especially after putting out a fairly lack-lustre competitor to AMD's cards.
 
I am pretty indifferent about the GF100. Of course, I feel the exact same way about the HD58xx series*. These two architectures' performance gains over the previous generation are laughable in comparison to how many SPs/CCs/(whatever they're calling them these days) they've added.
There are quite obvious reasons for that.

RV870 - the GPU is smaller than it should be, as a precaution against problems caused by the crappy 40nm node (see AnandTech)... another portion of performance is consumed by interpolation, which was executed by dedicated units in R7xx, but R8xx moved it to the shader core.

GF100 - GTX480 has a deactivated cluster due to low yields, a low core clock because of problems with the 40nm manufacturing process, and maybe texturing power is a bit low (missed target clocks?).

Anyway, the majority of these issues are a consequence of TSMC's manufacturing process. And despite that, the GTX480 is slightly faster than the GTX295, just as the GTX280 was slightly faster than the 9800GX2.
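On the interpolation point above: a toy C++ sketch of the per-pixel attribute math involved; this is only an illustration of the kind of work that moved into the shader core, not AMD's actual implementation.

```cpp
#include <cstdio>

// Each pixel's attributes are blended from the triangle's three vertices
// using the pixel's barycentric coordinates. On R7xx dedicated interpolator
// units did this; on R8xx these multiply-adds run on the shader ALUs
// instead, so they now compete with "real" shader work.
float InterpolateAttr(float a0, float a1, float a2, float u, float v) {
    float w = 1.0f - u - v;           // third barycentric coordinate
    return w * a0 + u * a1 + v * a2;  // a few MULs/MADs per attribute, per pixel
}

int main() {
    // Interpolate one texture coordinate at the triangle's centroid.
    std::printf("%.3f\n", InterpolateAttr(0.0f, 1.0f, 0.5f, 1 / 3.f, 1 / 3.f));
}
```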
 
TS might not be a bottleneck, but in tessellated scenes setup/raster quite likely are.
That's the issue. Heaven 1.0 is now setup-limited, being bottlenecked by setup for 15% of the frame time. It seems possible that's on the shadow map rendering pass (is there more than one?), but I don't know.

More reasons to go deferred. Yeah, I am a sucker for deferred shading. :smile: With tessellation, however, it might be more beneficial to rasterize everything and then shade, instead of just buffering up all the geometry.
Metro 2033 has some kind of deferred engine, so it'll be interesting to see what's happening with tessellation there.
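As a rough illustration of why deferred helps here, a minimal C++ sketch of the classic two-pass ordering (all types and function names are invented stand-ins): geometry, and therefore tessellation, is rasterized once, and lighting then works purely from the G-buffer.

```cpp
#include <cstdio>
#include <vector>

// Invented stand-in types; a real engine would hold actual GPU resources.
struct Mesh {};
struct Light {};
struct GBuffer { int geometryPasses = 0; int lightPasses = 0; };

void RasterizeToGBuffer(const Mesh&, GBuffer& g) { ++g.geometryPasses; }
void ShadeFromGBuffer(const Light&, GBuffer& g)  { ++g.lightPasses; }

// Classic deferred ordering: rasterize ALL geometry once (the only place
// tessellation cost is paid), then shade per light from the G-buffer.
void RenderDeferred(const std::vector<Mesh>& meshes,
                    const std::vector<Light>& lights, GBuffer& g) {
    for (const Mesh& m : meshes) RasterizeToGBuffer(m, g);  // pass 1: geometry
    for (const Light& l : lights) ShadeFromGBuffer(l, g);   // pass 2: lighting
}

int main() {
    GBuffer g;
    RenderDeferred(std::vector<Mesh>(100), std::vector<Light>(8), g);
    // Tessellation ran once per mesh regardless of light count:
    std::printf("%d geometry passes, %d light passes\n",
                g.geometryPasses, g.lightPasses);
}
```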

Jawed
 
In AFR mode each frame is separate, so there should be no increase in traffic at all. DX11 tessellation does not reuse any data or store any additional data to GPU memory. Everything is generated on the fly (including tess factors for all edges and for the inner triangle).
My understanding is that any kind of adaptive tessellation is multi-pass, requiring the control cage to be evaluated for the variety of parameters that impact tessellation, e.g. distance, facing, silhouette.
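A toy C++ sketch of how two of those parameters (distance and facing/silhouette) might feed a per-edge tessellation factor; the heuristic and constants are made up purely for illustration:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical per-edge factor: closer edges and near-silhouette edges
// (where the surface normal is roughly perpendicular to the view vector)
// get more triangles. Clamped to the DX11 maximum factor of 64.
float EdgeTessFactor(float distToCamera, float facingDotView) {
    float byDistance = std::clamp(64.0f / std::max(distToCamera, 1.0f),
                                  1.0f, 64.0f);
    float bySilhouette = 1.0f - std::fabs(facingDotView);  // 1 at silhouette
    return std::max(1.0f, byDistance * (0.25f + 0.75f * bySilhouette));
}

int main() {
    std::printf("near, on silhouette: %.1f\n", EdgeTessFactor(2.0f, 0.05f));
    std::printf("far, facing the eye: %.1f\n", EdgeTessFactor(50.0f, 0.9f));
}
```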

Jawed
 
That's true, good point. Now, that said, what should we be looking for with respect to power consumption? It seems to me that total system power consumption is the important metric for the end user, right?
That would be the correct thing to do when comparing systems, but when you are doing a graphics card review you really want to understand the differences between the graphics cards.

Besides, do you know the rest of the settings on the platform that was used for the rest of the tests? For instance, if it is the same platform as the rest of the performance tests, which Windows power settings were used? People often use performance mode, which turns off a lot of Windows' power management settings.
 
:D

[attached image: thermi.jpg]
 
That's clearly shopped! JHH would never wear a shirt that doesn't show his Pectoral Fortitude.

Why is GF100 so good in minimum framerates?
Maybe it's the other way around? Cypress has some unexplainable dips during some benchmarks, like at the start of a Crysis benchmark. It could be a compiler issue. Bad Company 2 also shows quirks and I'm sure we could find more if we want to.
 
Actually, the tests Kyle did are pretty much a best case. Inside a case the ambient temperature would be significantly higher in all but effectively mesh cases, and the vast majority of cases provide very little sound dampening.

Sure, if you put it inside my always-on machine it will quiet it down, but my always-on machine has $100+ in McMaster-Carr aftermarket additions (love me some mass-loaded vinyl!) plus a significant cost in the best fans money can buy.

My issue was with the noise test, not the heat test.
 
See my signature ;)

Though it also seems like the era of TSMC's tick-tocking node/half-node has ended, which looks like a really serious problem.

Jawed

A really serious problem for whom? It's still up in the air about the when and the what and the where as far as ATI's next GPU is concerned, so I'll take a flying leap and presume you're talking about Nvidia, with little option to shrink their GPU within a year to bring it down to a nominal size.
 
Heh, "only" 1920x1200 4xAA? Feel free to scrounge around for whatever settings you like. Maybe even check out those 8xAA tests while you're at it. ;) No matter where you look you're going to come back with the same conclusion - big gains over GT200 and strong DX11 performance.

[edit] Of the 9 settings used in that review you picked the only one where Cypress shows a bigger gain? Lolz, yes surely I'm cherry picking when 8 of 9 settings are in line with my comments. Sigh....
I just showed you an example of cherry picking; I can also cherry-pick another review for my averages and we would arrive at a smaller jump for GF100. :LOL: Funny, you were the one throwing premature cherry-picking accusations around. Also, I'm not the one in the minority here who thinks both the 480 & 470 rock my socks off. :LOL:

Just to be clear, the architecture looks good. As products, they don't.
 
I just showed you an example of cherry picking; I can also cherry-pick another review for my averages and we would arrive at a smaller jump for GF100. Funny, you were the one throwing premature cherry-picking accusations around.

Ah, so the entire computerbase review is a cherry picked sample. You got me - that's exactly why I chose it and not because they present an extremely easy interface for comparing cards. If you're so inclined please provide a list of sites that you approve for use in these discussions and I'll be sure to use those instead.

Also, I'm not the one in the minority here who thinks both the 480 & 470 rock my socks off.

Heh, you keep repeating this as if popular opinion in this thread should somehow get in the way of hard numbers. You wouldn't be trying to distract from the real issue now would you? :)
 
Heh, you keep repeating this as if popular opinion in this thread should somehow get in the way of hard numbers. You wouldn't be trying to distract from the real issue now would you? :)

Now now, he just explained how he uses cherry picking to get some hard numbers. And I agree with him that it is important that the numbers you pick fit in with the general mood here.
 
Now now, he just explained how he uses cherry picking to get some hard numbers. And I agree with him that it is important that the numbers you pick fit in with the general mood here.

Why would anyone want to do that? Anti-establishment is way more fun! :LOL:
 
After having read through many reviews and giving it some time, my final take on Fermi is this:

There's not much about Fermi worth giving a thumbs up over. You can say "oh but the performance is there!" but that comes at a price. The card punishes its users to extract that performance. Severe heat build-up, huge power draw and a shrieking fan are what one must readily endure to experience the performance.

Nvidia disregarded practicality for a benchmark victory. Not a great one at that, considering this is the least they could do against competition which has had their products out on the market for 6+ months. All this and they still have the nerve to charge 25% more than the competition. It's just a poor offering all around.

Perhaps future iterations of Fermi will be more reasonable from a price, performance and practical-usage standpoint. This version, however, should have stayed in the labs....
 
After having read through many reviews and giving it some time, my final take on Fermi is this:

There's not much about Fermi worth giving a thumbs up over. You can say "oh but the performance is there!" but that comes at a price. The card punishes its users to extract that performance. Severe heat build-up, huge power draw and a shrieking fan are what one must readily endure to experience the performance.

Nvidia disregarded practicality for a benchmark victory. Not a great one at that, considering this is the least they could do against competition which has had their products out on the market for 6+ months. All this and they still have the nerve to charge 25% more than the competition. It's just a poor offering all around.

Perhaps future iterations of Fermi will be more reasonable from a price, performance and practical-usage standpoint. This version, however, should have stayed in the labs....

Exactly how everyone without bias will see it. Guess now we have a rough estimate of the percentage of users with an nVidia bias (on B3D) from the poll.

The fact that they still show up here arguing is quite telling; at least others have remained silent, as they should.
 
Exactly how everyone without bias will see it. Guess now we have a rough estimate of the percentage of users with an nVidia bias (on B3D) from the poll.

Yep, we are an environmentally conscious bunch over here. Agreed on the fan noise - it's an annoying hurdle to get over and will kill it for some people. Otherwise, I'm not getting the price complaints - it seems perfectly in line with the performance advantage, which is a miracle for a high-end part. What would be a fair price for those cards?
 
Exactly how everyone without bias will see it. Guess now we have a rough estimate of the percentage of users with an nVidia bias (on B3D) from the poll.

Does this mean that we also have a rough estimate of the percentage of users with an ATI bias at B3D?

My take on Fermi is that the architecture is interesting. The implementation leaves quite a lot to be desired - whether that's NVIDIA's fault or TSMC's depends on what colour glasses you wear. I fear we'll never get an account of the real root causes of Fermi's issues that we can read without suspecting it's being twisted one way or another by the author.

As for staying in the lab - it's not the first graphics card launched that falls into that category.
 