NVIDIA GF100 & Friends speculation

They don't want to spoil their launch. AMD has every option: faster cards, lower prices, or both. But without information about nVidia's line-up it's a bit difficult for them. AMD did the same with the RV770. :LOL:

A price drop can be done quickly, actually.

The 5870 at $400 can come down to $350, the 5850 can go down to $250, and the 5830 can hit $200, if ATI feels the need to drop prices.

I'm sure ATI already has access to, or will have access to, a GTX 480 or 470 and is messing with clocks on the thing and figuring out the best way to move forward.

A hardware response, unless already planned, is much harder than a price drop. A price drop can be done in 1-2 days.
 
Yeah, a total of 30 seconds out of 260 seconds. :LOL:

Of course, the point being that unless a game uses EXTREME amounts of tessellation (remember all the whines coming from greenies who said the Unigine engine didn't reflect real games and was only an extreme representation), Cypress looks like it's able to keep pace with, or possibly pass, the six-months-late, twice-as-big Fermi... then again, wasn't the 480 supposed to be challenging the 5970?
 
Here's the full picture... the sections they use are kind of misleading. Heck, in two areas ATI is actually faster (using old drivers, and doubtfully the updated Unigine engine).

Not only that, but they seem to be operating under the assumption that real games will produce anywhere near the sort of workload that Unigine does in that segment. If they aren't able to do better than that in regular old DX9/10, then I have to say the new caches, higher bandwidth and doubled shader count haven't been put to good use at all. Could the entire architecture be fatally bottlenecked by the scarce texturing resources?
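
To put "extreme" in perspective, here's a rough back-of-the-envelope sketch (the patch count and the N^2 amplification rule are my own assumptions, roughly what D3D11 integer partitioning gives you, with the factor capped at 64): even a modest base mesh balloons into hundreds of millions of triangles at high factors, which is exactly the kind of load I'd argue no regular DX9/10-era game generates.

Code:
# Rough triangle amplification from tessellation (hypothetical numbers).
# Assumption: a triangle patch with a uniform tessellation factor N expands
# to roughly N^2 sub-triangles; D3D11 caps the factor at 64.

def amplified_tris(patches: int, tess_factor: int) -> int:
    """Approximate triangle count after tessellating `patches` patches."""
    return patches * tess_factor ** 2

base_patches = 50_000  # hypothetical base mesh
for factor in (1, 4, 16, 64):
    print(f"tess factor {factor:>2}: ~{amplified_tris(base_patches, factor):,} triangles")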
 
A price drop can be done quickly, actually.

The 5870 at $400 can come down to $350, the 5850 can go down to $250, and the 5830 can hit $200, if ATI feels the need to drop prices.

I'm sure ATI already has access to, or will have access to, a GTX 480 or 470 and is messing with clocks on the thing and figuring out the best way to move forward.

A hardware response, unless already planned, is much harder than a price drop. A price drop can be done in 1-2 days.

Yes, but look what happened after nVidia dropped the price on GT200. That was not much fun for them.
I don't think that AMD has access. Or someone must really hate nVidia as a partner...
 
Of course, the point being that unless a game uses EXTREME amounts of tessellation (remember all the whines coming from greenies who said the Unigine engine didn't reflect real games and was only an extreme representation), Cypress looks like it's able to keep pace with, or possibly pass, the six-months-late, twice-as-big Fermi... then again, wasn't the 480 supposed to be challenging the 5970?


It will challenge the 5970 in DX11 titles (it might not with high levels of AA, because the 5970 is going to have more bandwidth available to it). I don't remember anyone saying Unigine didn't reflect real games; at least I didn't say that. Well, to a point it doesn't, since there are other things that will be going on in games, but from a graphics point of view it's a pretty good assessment.
 
Not only that, but they seem to be operating under the assumption that real games will produce anywhere near the sort of workload that Unigine does in that segment. If they aren't able to do better than that in regular old DX9/10, then I have to say the new caches, higher bandwidth and doubled shader count haven't been put to good use at all. Could the entire architecture be fatally bottlenecked by the scarce texturing resources?


Not exactly. Let's look at some of Kyle's (HardOCP) graphs:

http://www.hardocp.com/image.html?image=MTE3MDk3NTY2NDA3ZUxpb0ZKVm5fMTRfM19sLmdpZg==

Why is it that, even though the G80 has pretty much more of everything over the X1950XT, it still doesn't beat it all the time in this game's timed walkthrough?

There are parts in the graph (I don't really care much about the AA difference) with crazy fluctuations in favor of the X1950XT, like 30% to 100% faster in some parts of the game. You will get parts of a game or demo that are more favorable to one card or another; the Unigine DX11 engine was built on ATI hardware, so it should be better on ATI hardware for the most part.
 
Not exactly. Let's look at some of Kyle's (HardOCP) graphs:

Oops, wrong link.

Just a sec, let me find it again.

Why is it that, even though the G80 has pretty much more of everything over the X1950XT, it still doesn't beat it all the time in this game's timed walkthrough?

16X TR SSAA vs. 2X ADAA?
 
That's why it was the wrong link; I had two of them open and closed the wrong one ;)

Your question remains. ;) This is not apples to apples either.

2X MSAA/16X AF or 4X TR SSAA @ 2048x1536
vs.
no AA / 4X HQ AF @ 2048x1536

And you wonder why the X1950XT wins in parts of the game?

Edit:
This is no counterexample to the Heaven demo. You could just as easily turn this around and attribute the same kind of "crazy fluctuations", this time in Fermi's favor, to the tessellation performance of Fermi.
 
Your question remains. ;) This is not apples to apples either.

2X MSAA/16X AF or 4X TR SSAA @ 2048x1536
vs.
no AA / 4X HQ AF @ 2048x1536

And you wonder why the X1950XT wins in parts of the game?


No, look at how it fluctuates. I don't care if the GeForce was using 4x AA; the hit from it in that game is meaningless, somewhere around 5%, at most 10%, if I remember correctly. When you have a card that in some areas of the game just beats the other by 100%, well, that 10% is really nothing.

Hmm, no, you can't turn that around on the GeForce in the Heaven demo, since its frame rates tend not to be as diverse ;)

My point being that within a game you are going to have parts that favor each IHV's hardware, but that engine's DX11 port was built and optimized on ATI hardware ;)


And if you want a link for the AA performance in that game, here:

http://www.cdrinfo.com/Sections/Reviews/Print.aspx?ArticleId=20457
 
I was surprised that Petersen's run-through is with AF and AA off, but I suppose that's as close as he can get to a tessellation-only demo. He could have run it at 1280x720, I suppose.

It should be fairly easy to compare performance on ATI, since he has the camera stationary for a while. Framerate of 45-ish is easily double (approaching triple) what we've seen as the worst case on ATI.
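
A quick sanity check on what that ratio implies (the 45 fps figure is from his run; the ratios and the resulting ATI numbers are just the implied range, not measurements):

Code:
# Implied ATI worst case if GF100 holds ~45 fps at the stationary camera
# point and the advantage is "double, approaching triple" (assumed ratios).
gf100_fps = 45.0
for ratio in (2.0, 3.0):
    ati_fps = gf100_fps / ratio
    print(f"{ratio:.0f}x advantage -> ATI at ~{ati_fps:.0f} fps "
          f"({1000.0 / ati_fps:.1f} ms per frame)")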

Jawed
 
[Attached image: snapshot2010.png]


Here's the full picture... the sections they use are kind of misleading. Heck, in two areas ATI is actually faster (using old drivers, and doubtfully the updated Unigine engine).
I think the point is that for that particular benchmark, nVidia has a vastly better minimum framerate: when ATI's framerate plummets, nVidia's keeps on chugging. Now, whether or not the benchmark is honest is a separate issue, but I think that this benchmark does show nVidia's new hardware in an extremely positive light.
 
At least AMD now needs to work out something with the tessellation for the 6k series to beat them in that useless demo. The worst part will be if the GTX 480 barely wins by 10-15% in real games. Then the 6k Radeons won't need too much of a speed bump :rolleyes:.
A third player in this business wouldn't hurt these days. ;)

So, "DX11 doesn't matter"?
 
At least AMD now needs to work out something with the tessellation for the 6k series to beat them in that useless demo. The worst part will be if the GTX 480 barely wins by 10-15% in real games. Then the 6k Radeons won't need too much of a speed bump :rolleyes:.
A third player in this business wouldn't hurt these days. ;)


Well, we won't see Intel as a third player for a while :D.

I don't think the GTX 480 is only 10-15% faster in real games; it will probably be a bit more, since the GTX 470 looks to be around the HD 5870. To me, what we saw in all those white papers was actually GTX 470 benchmarks: if we look at the Unigine benchmarks in the white paper, it showed 1.5 to 1.8 times faster, but in this latest one it looks to be 1.5 to 2.0, possibly a bit more, times faster over those same 60 seconds.

Also, if you aren't pushing these cards, what's the use of even looking at them?
 
So, "DX11 doesn't matter"?

I didn't say that DX11 doesn't matter. But if you look at the graph closely, it's strange that in the less taxing scenes (100% of today's games) the card is just a little faster than the 5870. It seems to be really limited somehow.
And the graph shows the GTX 480, so this is not the GTX 470 like people stated about the GF100 presentation.
 
Now, whether or not the benchmark is honest is a separate issue, but I think that this benchmark does show nVidia's new hardware in an extremely positive light.

If all you want to do is the kind of crazy tessellation that will never be in any game, sure. Let's wait to see how it does when it's actually doing other game stuff at the same time.
 
To me, what we saw in all those white papers was actually GTX 470 benchmarks: if we look at the Unigine benchmarks in the white paper, it showed 1.5 to 1.8 times faster, but in this latest one it looks to be 1.5 to 2.0, possibly a bit more, times faster over those same 60 seconds.

He runs it without AA and AF for some reason, so it's hard to tell.
 