So, who thinks their predictions were right about the NV40?

Joe DeFuria said:
Seiko said:
I thought we were finally going to see the end of this notion and set a 60FPS milestone with maximum quality.

In all seriousness....that will never happen until someone is able to establish an "image quality benchmark" and sum up the results in a single number.

So...never. :cry:

Not to mention the fact that as we become more and more CPU bound, IQ will, IMHO, end up king of the hill.

The fact that it's actually more important doesn't make it easier to sell. ;)

I think you're right at this current moment in time, but I'm pleasantly surprised by how many articles are bringing IQ to readers' attention, and I wonder how long the FPS wars will last. I honestly believe it's in the hands of the reviewers to highlight the IQ element above FPS once the FPS go over a certain amount. Of course that amount is subjective, but again, if enough people state the same number it may just become the standard we're looking for.
I actually think [H] is trying to do this, although until many more join in it's unlikely to succeed. Personally I hope it does, as those additional FPS are simply going to waste :(

As for scaling, well that's a whole different topic ;)
 
Everything as I expected...except for...
PS performance.
Judging by Far Cry and the HDR simulation, it seems only 50~60% faster than the 9800XT.
It's probably just an immature driver issue...but still...I expected at least a 70~80% boost.

Still, it's a very good vga. That's for sure.
 
Big Bertha EA said:
Too bad nobody benched the 6800U on Tiger Woods 2004...a very PS 2.0 intensive application....

I thought I saw it benched in one of the reviews listed at the link below, which collects the various reviews out on the net:

http://www.nvnews.net/
 
No predictions. Just disappointed that with a 2 slot cooling system they couldn't see fit to have it exhaust the heat out of the system. That just doesn't make sense to me. I'd have to move my slot cooler just up under it to exhaust the heat, which would in essence make the cooling solution 3 slots.

And of course, the 2 molex requirement completely knocks it out for me. I have enough power, but not enough free dedicated lines.

Oh well. Hopefully R420 can provide the same perf with one molex and a one-slot solution, or a two-slot exhaust solution. :(
 
Uttar said:
No, seriously, where the NV40 *did* impress me is the AF implementation (I was assuming 8x AF max, and IMO being *able* to do angle dependency is a GOOD thing, not a bad one),
I really don't think so. First of all, there is as yet no guarantee that the NV40 can actually be made to look more like older nVidia chips in its anisotropic filtering algorithm.

Secondly, the angle dependent algorithm is nothing more than a worse approximation of the true function than what nVidia has offered previously.

I think nVidia decided that since nobody cares that ATI has crappy angle-dependent AF, they could save some transistors and do basically the same thing (though at higher accuracy, as seen by the lack of artifacts; so while the AF won't be as good as the GeForce4's, it will at least not have the added aliasing I've been complaining about on the R3xx).

Bad nVidia. Don't get me wrong, I still want one, really bad, but I am disappointed by the drop in AF quality.
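To make the approximation point concrete, here's a minimal sketch (Python, purely illustrative; the cosine weighting and the 2x floor are my own assumptions, not nVidia's or ATI's actual hardware logic) contrasting an angle-invariant implementation, which offers the full selected degree of anisotropy at any surface orientation, with an angle-dependent one, which dips toward a lower degree for surfaces rotated away from the screen axes:

```python
import math

def angle_invariant_max_aniso(setting):
    # Angle-invariant AF: the full selected degree is available everywhere.
    return lambda angle_deg: setting

def angle_dependent_max_aniso(setting):
    # Hypothetical angle-dependent AF: the available degree drops for
    # off-axis surfaces (worst near 45 degrees), producing the familiar
    # "flower" pattern seen in AF tester applications.
    def max_aniso(angle_deg):
        # Weight is 1.0 at 0/90 degrees and falls toward 0 near 45 degrees.
        weight = abs(math.cos(math.radians(2 * angle_deg)))
        return max(2, min(setting, round(2 + (setting - 2) * weight)))
    return max_aniso

full = angle_invariant_max_aniso(16)
angled = angle_dependent_max_aniso(16)
for a in (0, 15, 30, 45, 60, 75, 90):
    print(f"{a:2d} deg  invariant: {full(a):2d}x  angle-dependent: {angled(a):2d}x")
```

The "worse approximation" complaint is exactly the off-axis case: the ideal degree depends only on how stretched the texture footprint is, not on which way the surface happens to be tilted on screen.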
 
I really thought it would be 8x2 at best, so moving from 4x2 to 16x1 surprised me (as well as the fact that apparently they haven't misrepresented it, which I half-way expected to see). The two molex connectors surprised me as well; I hadn't expected those either. The size of the chip is surprising, too, and I wonder what the 222M-transistor count will do to yields. I mean, in light of the trouble they had with nV30/5/8 on that score, it certainly seems a prudent observation as nV40 is far more complex. I guess we'll see when we start getting some firm shipping dates from nVidia board OEMs.

On the whole it is gratifying to see that nVidia is becoming competitive once again, as nV40 convincingly puts to bed the "Who needs DX9?" and "We only compete with Intel" apologies and evasions nVidia's been fond of recently. I think we can all relax with the firm conviction that all of the players are now pretty much agreed on the "future of 3d" and that it isn't DX8 after all...;)
 
Well, I predicted the RRP in £'s for the Ultra fairly accurately. "£400" I said. "No", they said, "that's like $800, that's silly, that would be a rip-off", they said. "Exactly", said I (retiring and stroking my whiskers).
 
Seiko said:
I overestimated the FSAA. I can't believe Nvidia still can't surpass ATI's.
In what area is ATI's algorithm better? The gamma-corrected FSAA? In the leaked screenshots yesterday from HardOCP there was no visible difference between the Radeon 9800 XT and Geforce 6800 at 4x FSAA. So I don't really see any benefit to ATI's method.

Seiko said:
I missed the AF downgrade completely. I don't understand why Nvidia would lose their slim IQ advantage in this area. By doing so they can't really claim any IQ superiority.
Well, the ATI algorithm seems to be more efficient performance-wise, and its angle dependence isn't visible in most situations, according to many, many articles and debates over that issue. So, in my opinion, a great move by nVidia.



Now, what I am curious about is how much performance can be improved with new driver revisions for PS2.0 and PS3.0 via compiler updates. It seems that the PS architecture of the new Geforce is very powerful, but also quite complex. So I personally don't expect that they've got their compiler optimizing perfectly right away.
 
sonix666 said:
In what area is ATI's algorithm better? The gamma-corrected FSAA? In the leaked screenshots yesterday from HardOCP there was no visible difference between the Radeon 9800 XT and Geforce 6800 at 4x FSAA. So I don't really see any benefit to ATI's method.

Weren't those shots messed up because of jpeg compression?
 
It turned out to be pretty much what I was expecting. Though it only yields about a 30% increase in Far Cry, and even less in NFS Underground and UT2K4, which is disappointing.
 
ANova said:
It turned out to be pretty much what I was expecting. Though it only yields about a 30% increase in Far Cry, and even less in NFS Underground and UT2K4, which is disappointing.
Yeah, but UT2004 is pretty CPU limited.
 
WaltC said:
I mean, in light of the trouble they had with nV30/5/8 on that score, it certainly seems a prudent observation as nV40 is far more complex.
I think the NV40's excellent performance really is good evidence that the NV3x core was severely broken. That is, it did not make remotely good use of the transistor budget. After all, when you can quadruple the number of pipelines, fix a number of performance issues, and barely reduce the theoretical maximum processing power of each pipeline without even doubling the number of transistors, something must have been seriously wrong with the design of the previous architecture.
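As a quick back-of-the-envelope for that claim (Python; the NV38 transistor count is an approximate, commonly quoted figure rather than an official spec, and the pipeline counts are the 4x2 vs 16x1 configurations mentioned earlier in the thread):

```python
# Rough arithmetic behind "quadruple the pipelines without even
# doubling the transistors".
nv38_transistors = 135_000_000   # approximate figure for NV35/NV38
nv40_transistors = 222_000_000   # count quoted in the previews
nv38_pipes, nv40_pipes = 4, 16

print(f"pipelines:   x{nv40_pipes / nv38_pipes:.1f}")              # x4.0
print(f"transistors: x{nv40_transistors / nv38_transistors:.2f}")  # ~x1.64
print(f"per pipeline: {nv38_transistors // nv38_pipes:,} -> "
      f"{nv40_transistors // nv40_pipes:,}")
```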

I therefore stand by my previous postulate that the NV30 that nVidia originally meant to release was very different from the one that was released, and that the NV30 we saw was designed in a very short timeframe after process troubles prevented the release of the "original" NV30 design.
 
Chalnoth said:
I therefore stand by my previous postulate that the NV30 that nVidia originally meant to release was very different from the one that was released, and that the NV30 we saw was designed in a very short timeframe after process troubles prevented the release of the "original" NV30 design.

I thought your previous postulate was that NV30 was not broken (which is what most others were saying), just that it needed lots of "driver love".
 
sonix666 said:
In what area is ATI's algorithm better? The gamma-corrected FSAA? In the leaked screenshots yesterday from HardOCP there was no visible difference between the Radeon 9800 XT and Geforce 6800 at 4x FSAA. So I don't really see any benefit to ATI's method.
Whether or not the gamma-corrected FSAA is good depends highly on your monitor, so we should all expect to see wildly differing opinions on this.

The benefits of gamma-correct FSAA can most easily be distinguished by looking at wireframe-type shots. From the shots we have currently, the ones from HardOCP's review that include a powerline come closest.

The image that, on your monitor, most closely resembles a solid line is the one with the better FSAA for your monitor. The lack of proper gamma correction will cause a line to appear "dotted" when viewed from a distance. The amount of gamma correction that is proper will essentially depend upon the phosphor that your monitor uses.

On my monitor, both images show improper gamma (as should be expected), but I find that the nVidia shot looks a tiny bit better (which says to me that the gamma correction that ATI uses is far too high for my monitor). I'm using an NEC Multisync 97F (19").

Edit: Btw, if ATI's gamma correct FSAA had allowed adjustment of the gamma level, ATI's would be better, hands down. But since it is not adjustable, it's more of a wash, and highly dependent upon your monitor.
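For anyone who wants to see the "dotted line" effect in numbers, here's a minimal sketch (Python; the simple power-law gamma of 2.2 and the helper names are my own assumptions, not ATI's or nVidia's actual hardware path) of blending an antialiased edge with and without gamma correction:

```python
def blend_naive(fg, bg, coverage):
    # Blend the gamma-encoded framebuffer values directly;
    # effectively what non-gamma-correct AA does.
    return fg * coverage + bg * (1.0 - coverage)

def blend_gamma_correct(fg, bg, coverage, display_gamma=2.2):
    # Convert to linear light, blend there, then re-encode for the display.
    # If display_gamma matches the monitor, a half-covered pixel really
    # does look halfway between the two colours.
    to_linear = lambda c: c ** display_gamma
    to_encoded = lambda c: c ** (1.0 / display_gamma)
    return to_encoded(to_linear(fg) * coverage + to_linear(bg) * (1.0 - coverage))

# White line (1.0) over a black background (0.0), pixel half covered:
print(blend_naive(1.0, 0.0, 0.5))          # 0.50 stored -> only ~22% brightness on a 2.2 display
print(blend_gamma_correct(1.0, 0.0, 0.5))  # ~0.73 stored -> perceived as roughly half brightness
```

If the correction baked into the hardware assumes a different gamma than your monitor actually has, the blended pixels land at the wrong brightness anyway, which is why this ends up being so monitor-dependent.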
 
ANova said:
It turned out to be pretty much what I was expecting. Though it only yields about a 30% increase in Far Cry, and even less in NFS Underground and UT2K4, which is disappointing.

I think the dilemma is which review to believe, if any? I've read about 6 and all seem to have different results as far as performance is concerned when testing the same game titles using the same criteria ...
 
I was disappointed in the core/memory clocks, but regardless of that the performance was everything I'd hoped for. Still, it's meaningless to me until I see the cheaper models reviewed. IQ improvements were about what I expected, but still not what I hoped for.
 
While I was not expecting another NV3x debacle, I was not expecting NV40 to be this good either. This is the biggest and most consistent raw performance gain between two Nvidia generations that I can remember. They really seem to have gotten their act together this time around. It's been long overdue for them to improve their AA and AF implementations too, but they could (or rather should) have done even more. Other than that, and the fact that there's no way in hell I can put one of these Ultras into my system (for power, noise, size and financial constraints), I am mighty impressed...

Now bring it on ATI, can't wait to see your reply! 8)
 
Joe DeFuria said:
Chalnoth said:
I therefore stand by my previous postulate that the NV30 that nVidia originally meant to release was very different from the one that was released, and that the NV30 we saw was designed in a very short timeframe after process troubles prevented the release of the "original" NV30 design.
I thought your previous postulate was that NV30 was not broken (which is what most others were saying), just that it needed lots of "driver love".
Yes. My original opinion was that it needed lots of "driver love" and that nVidia was really screwed over by Microsoft.

However, the more I looked at the architecture, the more various design decisions just seemed downright stupid. For example: consider that FP registers incur a performance hit, but FX registers do not. This would imply that nVidia was actually spending more transistors to implement FP and FX registers separately, while at the same time not using enough transistors for full FP speed. That just seems ludicrous: why not simply increase the size of the FX registers, and convert them to FP?

I believe my opinion on this changed ~2-3 months ago now. I suspect that many of the strange idiosyncrasies of the FX architecture are a result of a simple cut-and-paste of the NV2x pipelines that were then modified to add the desired features. This would seem to be in line with the fact that many of the non-shader systems seem nearly identical to the NV2x systems (i.e. texture filtering, AA). It really seems like the NV3x architecture's strange decisions are a result of time to market being the #1 driving force.

Edit:
Btw, I still think it would have been better for all involved if MS had exposed the FX12 format in PS 2.x. It really shouldn't have been challenging, even if the need for such a change only became apparent by about the summer of the year the FX was announced.
 
pharma said:
ANova said:
It turned out to be pretty much what I was expecting. Though it only yields about a 30% increase in Far Cry, and even less in NFS Underground and UT2K4, which is disappointing.

I think the dilemma is which review to believe, if any? I've read about 6 and all seem to have different results as far as performance is concerned when testing the same game titles using the same criteria ...

PREviews, not reviews. ;)

These are just nV-built reference cards (some manually modified too), so the 'cooling solutions' aren't set in stone either. :rolleyes:

When retail cards hit the shelves & WHQL drivers are available > then all will be evident. 8)

.02,
 
Well, I think I was the first to realise it could ship with GDDR3, almost 8 months ago, so I feel great about that calculation.

The actual 16-pipeline architecture, the clean-up of so many weak spots, and NVidia's openness in revealing what is going on are all impressive, as is its raw performance. Dave summed it up well - a much more parallel design.

I dearly want to see it run some PS3.0 demos once DX9.0c and new drivers are out.

And the big questions are price/availability and positioning relative to R420.

Well done NVidia!
 