NVIDIA GF100 & Friends speculation

So how long to go, and will we see them reviewing 1GB/2GB / overclocked + 2GB Cypress chips vs Fermi?
 
You misunderstand what I meant.

If uber tessellation (with the appropriate speed penalty) doesn't yield much visual difference, then having a higher FPS is preferable.
I understand what you meant and I repeat: it's not like GF100 will be slower with less tessellation. It's a question of what you're prepared to pay for:
1. for slower tessellation h/w, in the hope that no game out there will ever gain anything from high tessellation levels, which will be unplayable on your card no matter what added quality above "normal" mode they provide
2. for faster tessellation h/w, where you'll be able to decide for yourself whether you prefer more fps with less tessellation (since you're unable to see any difference) or more tessellation at playable fps.
In the second case you make your choice on a title-by-title basis. In the first one you make it once, when you pay for the card.

@the images
Extreme looks a lot nicer since they've put much more tessellation into the roof tiles. But it's more like they took tessellation out of the roof tiles altogether in normal mode...

I recall from earlier wireframe images that they've put a ton more triangles into the dragon and the cobblestones as well, though, and the difference there isn't nearly as pronounced in these images (given the number of triangles).

Seems rather disingenuous to me unless I'm overlooking something?
Most of the visible difference between normal and extreme in Heaven 2.0 lies in the tessellation distance LOD. Up close the difference isn't that big (although the road stones and roof tiles are smoother on extreme than on normal).
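For the curious, a distance-based tessellation LOD like the one Heaven appears to use can be sketched as a toy model (the `near`/`far`/`max_factor` numbers here are made up for illustration, not Unigine's actual values):

```python
def tess_factor(distance, near=10.0, far=100.0, max_factor=16.0):
    """Toy distance-based tessellation LOD: full tessellation up close,
    fading linearly down to factor 1 (no extra triangles) beyond `far`."""
    t = (distance - near) / (far - near)   # 0 at `near`, 1 at `far`
    t = min(max(t, 0.0), 1.0)              # clamp to [0, 1]
    return max(1.0, max_factor * (1.0 - t))

print(tess_factor(10.0))   # close-up geometry gets the full factor: 16.0
print(tess_factor(100.0))  # distant geometry gets none: 1.0
```

Bumping `max_factor` (roughly what "extreme" does vs "normal") mostly changes what you see up close; the far-away geometry collapses to factor 1 either way, which matches the screenshots.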

Dirt 2 is weird again. The 4890 is beating the 5870...
The 4890 runs the DX9 renderer while all the Evergreens and the 480 run DX11.
 
I used to do that until I realized how much power it was consuming.

That and the gpu client kept crashing my system every once in a while.

I do it in the winter in lieu of paying the gas bill ;) Never had any system crashes with the GPU client here, 3 systems, all running GT200b now (2x GTX 275, 1x GTX 285). Was running each for weeks and even months on end folding 24x7 without issue.
 
I understand what you meant and I repeat: it's not like GF100 will be slower with less tessellation. It's a question of what you're prepared to pay for:
No you don't then. You seem to have it in mind that I'm comparing this to the ATI solutions. My comment is solely that I'm willing to trade off a lower level of tessellation in exchange for more FPS if the visual difference isn't very much. This comment holds true even if ATI doesn't have tessellation at all!


Now, you're trying to say that Nvidia hardware will still be faster with less extreme levels of tessellation and this certainly is true. In this case, the question is whether the gap remains as large since the GTX480 is faster in general anyway.


I do it in the winter in lieu of paying the gas bill ;) Never had any system crashes with the GPU client here, 3 systems, all running GT200b now (2x GTX 275, 1x GTX 285). Was running each for weeks and even months on end folding 24x7 without issue.

Running it on a GTX 260 myself. It probably became unstable because I sometimes gamed with the client running by mistake (although I recall crashes in the client even without that).

I blame folding@home though. The ATI clients were also crash-prone on other systems I've tried them on.
 
"Done right" or "overdone right"?
Sure, every bit of extra tessellation is good, but the diminishing returns are huge, too.
I mean, I know of several cases, from AvP for example, of people who have played with tessellation enabled and disabled and don't really see any difference at all, because it's mostly character/enemy models that are tessellated and those aliens jump at you at such high speeds.
There's no indication that Evergreen is any good at tessellation.

This presentation has "red" indicating ATI, but no "green" indicating NVidia:

http://developer.amd.com/gpu_assets/Tessellation Performance.ppsx

ATI doesn't like triangles smaller than 8 pixels - seems to indicate that some combination of rasterisation and/or fragment-packing into hardware threads and/or hardware thread capacity is a problem.
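One plausible reason sub-8-pixel triangles hurt is 2x2 quad shading: rasterisers shade whole 2x2 quads, so tiny triangles pay for lots of helper pixels. A rough back-of-envelope model (a simplification that ignores the hardware's actual thread packing):

```python
def quad_invocations(pixels):
    """Given the set of (x, y) pixels a triangle covers, return how many
    pixel-shader invocations launch, assuming every touched 2x2 quad is
    shaded in full (simplified model of real hardware behaviour)."""
    quads = {(x // 2, y // 2) for (x, y) in pixels}
    return 4 * len(quads)

# A tiny 3-pixel triangle straddling quad boundaries:
covered = {(1, 1), (2, 1), (2, 2)}
print(len(covered), quad_invocations(covered))  # 3 covered pixels, 12 invocations
```

In this example only 25% of the shading work is useful; big triangles amortise the quad overhead away, which is one reason micro-triangles from heavy tessellation get expensive regardless of vendor.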

If some of the features that were planned for Evergreen were cut out in the "40nm disaster avoidance shrinkage", there might be something in there that makes or breaks tessellation.

Bilodeau's presentation:

http://developer.amd.com/gpu_assets/Direct3D 11 Tessellation Tutorial.ppsx

provides quite a nice list of things to do with tessellation - much better than most lists I've seen.

I agree arbitrary over-tessellation is pointless - but at least it doesn't kneecap NVidia. Robust is good.

Jawed
 
Running it on a GTX 260 myself. It probably became unstable because I sometimes gamed with the client running by mistake (although I recall crashes in the client even without that).

I blame folding@home though. The ATI clients were also crash-prone on other systems I've tried them on.

Gaming with the client on isn't generally a good idea, I've had game crashes due to it which is why I generally disable the client before gaming. The ATi FAH client is garbage. Performance is a joke and it's just not as stable as the NV client.
 