NVIDIA GT200 Rumours & Speculation Thread

But GTX260 SLI will be $800, and could well drop to $700 before long (according to some predictions). That's a long way below $1300. :)

i just settled my accident claim today; want to talk about *timing*? My check's ETA is the 18th!!! Unfortunately it is my own insurance company that is settling with me, as the motorist that hit me from behind [last October] at the red light was UNinsured .. or i would be getting TRI-SLi.

Since it was twice what the adjuster offered me on Wednesday [and there is no lawyer with his hand in my pocket, nor the tax man] and near the limit of the value of my case, i went for it and look forward to paying off ALL of my high-interest credit - and a GTX280 is in the works; a "treat" for me for sure [you know, "pain & suffering"]. i will be more than glad to give my first impressions of the GTX280 also, IF i can get it at MSRP.

What does your crystal ball say? Good availability for the GTX on launch day?

EDIT: Intel demos ET:QW with ray tracing .. 13 FPS and the water looks nice.
[Very slightly related; i think CUDA has to beat this kind of CPU-lopsided PR with Nvidia's own GPU-lopsided PR. It appears to me that they are saying the GTX280 will not need such an impressive CPU - i hope so, as i am keeping my e4300 @ 3.25GHz for a bit longer.]

http://www.tgdaily.com/html_tmp/content-view-37925-113.html
 
Assassin's Creed in reverse?

Maybe, but it's a bit different when you compare the two cases. AC did have a DX10.1 path that provided performance benefits to ATi cards (both cards, i.e. the 8800GT/HD3870, provided enough performance to fully play AC at decent settings), but it was buggy and showed rendering problems in some situations. Ubi took the easiest route (as devs) of fixing this by removing DX10.1 completely.

Whereas in CoJ's case, there was no adequate reason behind the changes. In fact, the initial benchmark saw nVIDIA cards performing a little better than their AMD counterparts. Interestingly, AMD bought the CoJ benchmark, including its publishing rights and whatnot. Soon after came a patch that enabled what's listed on Guru3D, which seriously crippled nVIDIA cards, as seen in most reviews out there today. IQ comparisons were done, but the supposedly improved IQ, according to Techland, was so minuscule that it made no sense to disable certain features, such as the hardware AA path being completely removed. There were other observations made, such as no AF being applied on R600 hardware.
 
Maybe, but it's a bit different when you compare the two cases. AC did have a DX10.1 path that provided performance benefits to ATi cards (both cards, i.e. the 8800GT/HD3870, provided enough performance to fully play AC at decent settings), but it was buggy and showed rendering problems in some situations. Ubi took the easiest route (as devs) of fixing this by removing DX10.1 completely.

It showed rendering errors on Nvidia hardware; there was no reason to remove the patch for ATi cards other than pressure from a certain vendor.

Whereas in CoJ's case, there was no adequate reason behind the changes. In fact, the initial benchmark saw nVIDIA cards performing a little better than their AMD counterparts. Interestingly, AMD bought the CoJ benchmark, including its publishing rights and whatnot. Soon after came a patch that enabled what's listed on Guru3D, which seriously crippled nVIDIA cards, as seen in most reviews out there today. IQ comparisons were done, but the supposedly improved IQ, according to Techland, was so minuscule that it made no sense to disable certain features, such as the hardware AA path being completely removed. There were other observations made, such as no AF being applied on R600 hardware.

What did the developers have to say about it?

<edit> nm this is a discussion for a whole other thread and shouldn't be done here.
 
I still think COJ is a very biased benchmark that shouldn't be trusted at all.
As opposed to all the remaining titles with benchmark modes that are part of the TWIMTBP program? :rolleyes:
Maybe ATI should be issuing statements about each of them too, so that they can get people to doubt their validity?
 
As opposed to all the remaining titles with benchmark modes that are part of the TWIMTBP program? :rolleyes:
Maybe ATI should be issuing statements about each of them too, so that they can get people to doubt their validity?

If they have hard evidence then why not?
 
Whoa, that explains a lot, no wonder NVIDIA cards are so far behind in that benchmark.
Actually, the developer responded to nVidia and made them look like idiots. The basis of nVidia's complaint was the use of shader-assisted AA resolve, which nV chips aren't optimized for; but although CoJ is perhaps the one and only game using it, from a Direct3D 10 point of view it's a standard procedure. Apart from that, nVidia complained about texture filtering, to which the developers replied that it actually benefits nVidia's GPUs (and we all know R6xx suck with AF, don't we?). Yet I agree that Call of Juarez should not be used as a game benchmark, since the game is so hopelessly boring.
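For anyone wondering what "shader-assisted AA resolve" actually means here: instead of letting the ROPs do the fixed-function box filter on the multisampled surface, the pixel shader reads the individual subsamples and combines them itself (typically so tone mapping can be applied per subsample), which D3D10 explicitly allows. A rough CPU-side sketch of the difference, with a made-up data layout purely for illustration (not the game's actual shader):

```python
# Conceptual sketch of a shader-assisted MSAA resolve vs. the fixed-function
# box resolve. Data layout is hypothetical: each pixel holds N HDR subsamples
# stored as (r, g, b) tuples of linear float values.

def box_resolve(subsamples):
    """Fixed-function style resolve: plain average of the raw subsamples."""
    n = len(subsamples)
    return tuple(sum(s[i] for s in subsamples) / n for i in range(3))

def tonemap(colour):
    """Simple Reinhard-style operator standing in for the game's tone mapping."""
    return tuple(c / (1.0 + c) for c in colour)

def shader_resolve(subsamples):
    """Custom resolve: tone-map each subsample *before* averaging, which keeps
    antialiased edges intact under HDR but costs shader ALU time and bandwidth."""
    mapped = [tonemap(s) for s in subsamples]
    n = len(mapped)
    return tuple(sum(s[i] for s in mapped) / n for i in range(3))

# One very bright and three dark subsamples, i.e. a geometry edge in an HDR scene:
pixel = [(8.0, 8.0, 8.0), (0.1, 0.1, 0.1), (0.1, 0.1, 0.1), (0.1, 0.1, 0.1)]
print("box resolve then tonemap:", tonemap(box_resolve(pixel)))
print("shader resolve:          ", shader_resolve(pixel))
```

The two paths only diverge on high-contrast edges, which is the usual argument for doing the resolve in the shader; the cost is that the resolve runs on the ALUs instead of dedicated hardware, which is presumably why it hurts some architectures more than others.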
 
I wouldn't say anything if it were actually the game using it. But if you compare real-world in-game FRAPS benchmarks to the integrated benchmark, it seems like all those specialties in the benchmark flyby have no effect in-game.

The visuals are there, but performance is quite the reverse of what the benchmark shows.
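For anyone who wants to reproduce that kind of check themselves: log a savegame run with FRAPS, compute the FPS figures from the frametime log, and set them against what the built-in flyby reports. A minimal sketch, assuming a FRAPS-style frametimes CSV with a frame index and a cumulative timestamp in milliseconds (file names and column layout here are placeholders):

```python
import csv

def fps_from_frametimes(path):
    """Compute average and minimum FPS from a FRAPS-style frametimes log.
    Assumes two columns: frame index and cumulative time in milliseconds."""
    times_ms = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            times_ms.append(float(row[1]))
    # Per-frame durations in ms, derived from the cumulative timestamps.
    deltas = [b - a for a, b in zip(times_ms, times_ms[1:])]
    avg_fps = 1000.0 * len(deltas) / (times_ms[-1] - times_ms[0])
    min_fps = 1000.0 / max(deltas)  # slowest single frame
    return avg_fps, min_fps

# Hypothetical logs: one from a real savegame run, one from the benchmark flyby.
for label, path in [("savegame", "coj_savegame frametimes.csv"),
                    ("flyby", "coj_flyby frametimes.csv")]:
    avg_fps, min_fps = fps_from_frametimes(path)
    print(f"{label}: {avg_fps:.1f} avg fps, {min_fps:.1f} min fps")
```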
 
=>CarstenS: Well I haven't been digging into the details. But nVidia's accusations certainly failed to mention your point.
 
=>CarstenS: Well I haven't been digging into the details. But nVidia's accusations certainly failed to mention your point.
Then i'd suggest you do some digging. We have benchmarked Call of Juarez quite regularly since the DX10 Enhancement Pack came out, and it seems that Radeons do considerably worse once you actually use a savegame & Fraps compared to the separate benchmark utility that was provided.

Call me biased for using our own benchmarks to prove my point, but nonetheless:
http://pcgameshardware.de/?article_id=624436&page=12
-> click CoJ DX9
-> click CoJ DX10
 
Wait, so the benchmark utility performance figures are completely different from the actual in-game numbers?

Anyway, I think we are going too off-topic.
 
Wait, so the benchmark utility performance figures are completely different from the actual in-game numbers?

Anyway, I think we are going too off-topic.
Yes - but we should discuss this in a separate topic if need be.
 
If i may add, i know this is the R7xx thread, but just for comparison i'll add the GeForce GTX 280 score:

perlin noise:
HD4850: ~ 335
HD3870: ~ 175
RV670X2: ~ 355
GTX 280: ~ 300

The 8800GTX gets ~150 in this test, and it scales pretty linearly with shader capability. The GTX 280 has a 100% increase in score with an 80% increase in flops over G80, so it looks like the MUL is more exposed, but who knows how often it will be available. The pixel shader test score is only ~80% higher than G80's, though.
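To put some rough numbers behind that flops comparison, here is a quick back-of-the-envelope check using the commonly quoted shader clocks and ALU counts (treat these figures as assumptions, not confirmed specs):

```python
# Back-of-the-envelope flops comparison behind the Perlin noise scaling argument.
# Shader clocks and ALU counts are the commonly quoted figures (assumptions here).

def gflops(alus, shader_clock_mhz, flops_per_alu_per_clock):
    """Theoretical single-precision throughput in GFLOPS."""
    return alus * shader_clock_mhz * flops_per_alu_per_clock / 1000.0

g80_madd       = gflops(128, 1350, 2)  # 8800GTX, MADD only
g80_madd_mul   = gflops(128, 1350, 3)  # 8800GTX, MADD + co-issued MUL
gt200_madd     = gflops(240, 1296, 2)  # GTX 280, MADD only
gt200_madd_mul = gflops(240, 1296, 3)  # GTX 280, MADD + co-issued MUL

print(f"G80:   {g80_madd:.0f} / {g80_madd_mul:.0f} GFLOPS (MADD / MADD+MUL)")
print(f"GT200: {gt200_madd:.0f} / {gt200_madd_mul:.0f} GFLOPS (MADD / MADD+MUL)")
print(f"Peak flops increase:         {gt200_madd_mul / g80_madd_mul - 1:.0%}")
print(f"Perlin noise score increase: {300 / 150 - 1:.0%}")
```

The score rising faster than the theoretical peak rate is what you would expect if GT200 manages to co-issue the extra MUL more often than G80 did, which is exactly the point about the MUL being more exposed.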

Vertex shader performance seems to be unchanged from G80 (at least in 3dmark06).
 
Weird that TechPowerUp uses older Forcewares and not the WHQL ones (177.26); at least Asus shipped them on the CD with the card.
 