NV40: Surprise, disappointment, or just what you expected?

Chalnoth said:
ChrisRay said:
Supersampling DOES blur text, though. You have a GeForce4 Ti, right? Just apply 4x supersampling, load Star Wars Galaxies, EverQuest, etc., and you will notice a blur. It's not a Gaussian blur like Quincunx, though; it sharpens textures but blurs text.
Actually, my GeForce4 Ti died. Regardless, it doesn't support 4x supersampling anyway, so it doesn't matter.

I did have a GeForce DDR for some time, though, and I guarantee you that it did not blur text with FSAA.


Yes it does. My GeForce4 Ti 4200 can do every AA mode my FX card can do; 2x2 OGSS is hidden in the drivers in D3D mode. Just unlock it with RivaTuner.

4xS, 8xS, 6x, and 4x (2x2) supersampling all apply a small blur to my text on my GeForce FX and GeForce4 Ti 4200. It surprises me that you don't know about the 2x2 supersampling mode; it's been around since the NV30 launch.
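For what it's worth, here's a minimal numeric sketch of why an ordered-grid supersample can soften screen-space text. It assumes a plain box-filter downsample (which is how these modes are generally understood to resolve): the HUD gets drawn into the double-size buffer along with the 3D scene, and any glyph edge that doesn't land on a 2-sample boundary is averaged into grey on the way back down.

```python
import numpy as np

# Illustrative sketch only: 2x2 ordered-grid supersampling renders the
# whole frame, HUD text included, into a buffer twice as wide and tall,
# then box-filters each 2x2 block back down to one screen pixel.

def downsample_2x2(hi_res):
    h, w = hi_res.shape
    return hi_res.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A crisp 1-pixel-wide glyph stem (1.0 = ink) covers 2 samples in the
# supersampled buffer. If it lands on an even sample boundary, the box
# filter recovers it exactly:
aligned = np.tile([0, 0, 1, 1, 0, 0, 0, 0], (2, 1)).astype(float)
print(downsample_2x2(aligned))  # [[0. 1. 0. 0.]] -- still sharp

# Shifted by one sample (half a screen pixel), the same stem gets
# averaged across two screen pixels -- the "small blur" on text:
shifted = np.tile([0, 1, 1, 0, 0, 0, 0, 0], (2, 1)).astype(float)
print(downsample_2x2(shifted))  # [[0.5 0.5 0. 0.]] -- smeared
```

Multisampling avoids this because only geometry edges get extra samples; textures and text are shaded once per pixel.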
 
The title of this post is difficult to answer without testing the R420 vs the NV40. The NV40 does not have much competition among currently available hardware. Even without that important basis of comparison, the results are encouraging, because things will only get better for what seems to be a pretty efficient architecture. NVDA has a much more focused design this time around with the 6 series, giving us a clearer idea of what it can and can't do, and it shows in the benchmarks.

One thing that is very refreshing: NVDA is really opening up more to the community. They seem to be sharing more details about their cards, moving away from the hype and the driver optimizations, and giving users a bit more control over the settings they can use.

From what I can see, developers are really excited about the GeForce 6 series. Good to see, and we will all be looking forward to what ATI brings to the table (which I am sure will be really good).
 
The only thing I'm really disappointed about with GF6 is the AA. PS3.0, we'll just have to see if that's ever an issue since R420 probably won't support it and developer support will probably be limited at best.
 
Malfunction said:
I was pleasantly surprised! :D I expected about 10 to 15 frames more than the current cards and instead got to see double the performance at 16x12! :oops:

The image quality is (imho) better on the nVidia side now, which is just what I was hoping for. Looking at the tail-shot comparison in Tech Report's review, I was completely happy not to have the blurring that shows up on the 9800XT. AA goes to nVidia as far as I am concerned, because of the non-blurring. (I can live with a minor difference in horizontal AA; blurring I can't.)

Tech Report Tail Section


Um, hate to burst your bubble, but if you read the review, he stated that in the non-AA shots both cards were blurred, so the Radeon was rendering it correctly. It was the 6800 that was not rendering the shot correctly. :rolleyes: :rolleyes: :?:
 
jimmyjames123 said:
The title of this post is difficult to answer without testing the R420 vs the NV40.

Within the context of the title, I don't really think you need R420 to formulate an opinion on NV40. Clearly they have delivered a part that not only has the features but also a massive performance leap over anything else previously available. Even if R420 edges it out, you can still formulate an opinion on NV40 now based on expectations, and in that sense it has delivered.

For myself, the surprise is not that NV40 has delivered this performance: I knew the specification for some time beforehand, and looking at it (on the assumption it was correct), this is where I would have expected NV40 to be. The surprise, for me, is that this time the majority of what we were told beforehand has actually come to pass (which wasn't necessarily the case previously), and that they have been fairly free and forthcoming with interesting architectural information. This is refreshing and welcome, and I hope it is something that is adopted long term and not just a one-off.
 
The only thing I'm really disappointed about with GF6 is the AA.

I was somewhat disappointed that their 8x AA mode is practically useless. Their 4x RGMS AA quality seems to be about on par with the R3xx's. It looks like NVDA wanted to focus on very high performance up to 4xAA/16xAF at high resolutions, which seems reasonable enough, I guess.
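To put rough numbers on the RGMS point, here's a quick sketch; the sample offsets are illustrative, not NV40's or R3xx's actual patterns. The count of distinct per-axis offsets sets how many intensity steps a nearly horizontal or vertical edge can show:

```python
# Why a 4x rotated/sparse grid beats a 2x2 ordered grid at the same
# sample count on near-axis-aligned edges. Offsets are made up for
# illustration; they are not the real NV40 or R3xx patterns.

ordered_4x = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated_4x = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

def distinct_steps(pattern, axis):
    # Distinct sample positions along one axis: each one is a coverage
    # level (shade) available to a nearly axis-aligned edge.
    return len({p[axis] for p in pattern})

for name, pat in (("ordered", ordered_4x), ("rotated", rotated_4x)):
    print(name, distinct_steps(pat, 0), "x-steps,",
          distinct_steps(pat, 1), "y-steps")
# ordered 2 x-steps, 2 y-steps -> near-vertical edges get only 2 shades
# rotated 4 x-steps, 4 y-steps -> 4 shades, visibly smoother gradients
```

That's how 4x RGMS lands about on par with R3xx's sparse 4x, and it helps explain why the supersampling-heavy 8x mode costs so much relative to the edge quality it adds.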

PS3.0, we'll just have to see if that's ever an issue since R420 probably won't support it and developer support will probably be limited at best.

I'm not so sure about that. It really looks as if developers are taking strides to adopt the new technology in the 6 series. Here is what was said at nvnews:

NVIDIA sends word of the current and upcoming games that will support Shader Model 3.0, which is a key feature of the GeForce 6 series of GPUs from NVIDIA. From what I understand, the runtime version of DirectX 9.0b is frozen and supports Shader Model 3.0. What Microsoft will be updating is the DirectX 9 Software Developer Kit (SDK), which will contain updated shader debugging tools.
Lord of the Rings, Battle For Middle-earth
STALKER: Shadows of Chernobyl
Vampire: Bloodlines
Splinter Cell X
Tiger Woods 2005
Madden 2005
Driver 3
Grafan
Painkiller
FarCry
 
The Baron said:
if the 6800 Ultra is the only thing that supports it, say hello to the PS1.4 bin. If it doesn't support it well, it's DOA. If it supports it very well, that would be interesting, but I still don't think it will catch on until there's a broader market base. Hell, look at PS2.0, and NV and ATI have been supporting that since August 2002. Heh.
Remember that the entire NV4x line will support PS 3.0. The rest of the lineup will be released by the end of the year, and will include parts that span the market from top to bottom.

With the incredible performance of the NV40, I am expecting good things from the rest of the lineup. I think we can expect to see lots of PS 3.0 parts sold this year, even if they are only sold by nVidia. I definitely expect that, at the very least, the NV4x parts for the mid-to-low range of the market will be the parts to buy this fall.
 
Performance is related to die size and, well, NV40 is just big.

When you consider the sub-parts, you have to think that the number of quads will demarcate them in terms of performance, and, given the size of NV40, it's tough to see how wafer cost / performance is going to match up for the lower-end products when you consider what processes are available. Realistically we have 130nm IBM FSG, 130nm TSMC FSG, 130nm low-k TSMC, 110nm TSMC, and possibly SOI from IBM. In terms of costs, which will be important for the lower-end parts, you have to look at 110nm; however, you'll notice that I didn't list a low-k 110nm process, which will place limitations on the overall speed of the transistors.
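To put some very rough numbers on that, here's a first-order die-per-wafer sketch; the die areas and the 300mm wafer size are assumptions for illustration, not confirmed figures for NV40 or any derivative:

```python
import math

# Rough sketch of the wafer-cost arithmetic behind the quad-count
# argument. Die areas and wafer size are illustrative guesses only.

def dies_per_wafer(wafer_mm, die_mm2):
    # Standard first-order estimate: wafer area over die area, minus an
    # edge-loss term for the partial dies around the circumference.
    r = wafer_mm / 2
    return int(math.pi * r ** 2 / die_mm2
               - math.pi * wafer_mm / math.sqrt(2 * die_mm2))

full_part = 290.0  # assumed ~290 mm^2 for the full 4-quad chip
half_part = 160.0  # hypothetical 2-quad derivative
for name, area in (("4 quads", full_part), ("2 quads", half_part)):
    print(name, dies_per_wafer(300, area), "candidate dies per wafer")
# 4 quads ~204, 2 quads ~389: halving the quads nearly doubles gross
# dies (before yield), which is why the cheap parts push toward the
# cheapest viable process even if it gives up low-k clock headroom.
```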
 
Intel and ATI chips will only support PS2.0, and chips from those vendors currently make up over 50% of the market. I still think the volume of PS3.0 versus PS2.0 chipsets shipped this year will be overwhelmingly in favor of PS2.0. The real question is whether there will be a significant performance or IQ increase from using PS3.0 in those games versus PS2.0; that we can't judge yet at all.

I'm pissed about the lack of more innovative AA. I would have thought 8x sparse-sampled multisampling for sure, but nope, apparently not. I had a Stupid Hope of stochastic antialiasing ever since that one rumor that lasted all of two days last June or so, but I knew that wasn't going to happen. Still, using supersampling as anything other than a legacy option is pretty stupid for a card that's as advanced as the NV40 in everything else.

No, 4x is not enough. :)

And note that those games support SM3.0, but it doesn't specifically say PS3.0. It's quite possible that many of those are VS3.0, which seems to be a more important leap than PS3.0.

Dave, I recall an NVIDIA press release about its partnership with TSMC et al. that stated 110nm at TSMC would not be ready for general-purpose chipsets until next year. Creating low-voltage chips is currently possible, but I kinda doubt that NV40 would qualify as low-voltage.
 
DaveBaumann said:
Performance is related to die size and, well, NV40 is just big.

When you consider the sub-parts, you have to think that the number of quads will demarcate them in terms of performance, and, given the size of NV40, it's tough to see how wafer cost / performance is going to match up for the lower-end products when you consider what processes are available. Realistically we have 130nm IBM FSG, 130nm TSMC FSG, 130nm low-k TSMC, 110nm TSMC, and possibly SOI from IBM. In terms of costs, which will be important for the lower-end parts, you have to look at 110nm; however, you'll notice that I didn't list a low-k 110nm process, which will place limitations on the overall speed of the transistors.

Interesting, so 110nm would only be good for high-volume, lower-performance chips then. So we should see it on the $150-and-below cards, or thereabouts.
 
Within the context of the title, I don't really think you need R420 to formulate an opinion on NV40. Clearly they have delivered a part that not only has the features but also a massive performance leap over anything else previously available. Even if R420 edges it out, you can still formulate an opinion on NV40 now based on expectations, and in that sense it has delivered.

I see your point, and I agree with you. I would think that most impartial enthusiasts and developers would be happy about what we have seen from the NV40. Would they be shocked? Not necessarily. But I think most people should be pretty content for the time being. What is perhaps most gratifying is that NVDA seems to be making a sincere effort to work with the enthusiast community and developers instead of alienating them.
 
The Baron, I hear you. I think DaveB was right when he talked about the NV40 being an evolutionary move in the chain. Those who speculate that the move to NV50 will be more revolutionary are most likely correct, I think.
 
Yes, but why couldn't evolution have better AA, damnit! :p

No denying I really want an NV40 to play with. A big part of it is the lack of games that actually push GPUs right now (Far Cry is basically it), so pushing up AA and AF is a necessity. Come HL2 and Doom 3, I probably won't care nearly as much as I do right now.
 
Speaking of Doom 3... I am curious to see what kind of gains the UltraShadow II technology will bring on the NV cards. Is this a feature that can be turned on/off? Have you heard any new rumors about this game being bundled with the GeForce 6 series?
 
jimmyjames123 said:
Speaking of Doom 3... I am curious to see what kind of gains the UltraShadow II technology will bring on the NV cards. Is this a feature that can be turned on/off? Have you heard any new rumors about this game being bundled with the GeForce 6 series?

Exactly the same benefit as UltraShadow I, since there seems to be no difference.

I hope Carmack doesn't do all the bounding-box lighting calculations on the non-NV cards, otherwise we are going to be wasting CPU time.
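For context, here's roughly what that per-light CPU work looks like; this is a generic sketch of computing a depth-bounds window from a light's distance and radius using standard perspective-projection math, not id's actual code, and the near/far values are made up:

```python
# UltraShadow is exposed to apps as a depth-bounds test: the app
# computes a conservative [zmin, zmax] window per light so the GPU can
# reject shadow-volume fragments that can't be lit. Generic math only.

def window_depth(view_z, near, far):
    # Map a positive view-space distance to the [0, 1] window depth of
    # a standard perspective projection.
    ndc_z = ((far + near) / (far - near)
             - (2 * far * near) / ((far - near) * view_z))
    return (ndc_z + 1) / 2

def light_depth_bounds(light_dist, light_radius, near=1.0, far=5000.0):
    # Clamp the light's bounding sphere to the frustum's depth range.
    zmin = max(light_dist - light_radius, near)
    zmax = min(light_dist + light_radius, far)
    return window_depth(zmin, near, far), window_depth(zmax, near, far)

# e.g. a light 100 units away with a 20-unit radius:
print(light_depth_bounds(100.0, 20.0))

# On hardware without the depth-bounds test these numbers buy nothing,
# which is the wasted-CPU-time worry if computed unconditionally.
```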
 
It really looks as if developers are taking strides to adopt the new technology in the 6 series

Isn't there a glaring similarity between all of these games? Try to think of the games not on this list, and consider how they differ from the ones that are.
 
The Baron said:
Dave, I recall an NVIDIA press release about its partnership with TSMC et al. that stated 110nm at TSMC would not be ready for general-purpose chipsets until next year.

I don't think you have recalled that correctly.
 
DaveBaumann said:
Performance is related to die size and, well, NV40 is just big.
Right. But if you consider that a part with half the pipes would, according to current benchmarks, still be faster than an equivalent R3xx part (in shaders, at least) while having a similar number of transistors and much better feature support, I think the other parts in the NV4x lineup have a lot of promise.

Realistically we have 130nm IBM FSG, 130nm TSMC FSG, 130nm low-k TSMC, 110nm TSMC, and possibly SOI from IBM. In terms of costs, which will be important for the lower-end parts, you have to look at 110nm; however, you'll notice that I didn't list a low-k 110nm process, which will place limitations on the overall speed of the transistors.
I thought we'd heard rumors of 0.11 micron with low-k. Well, regardless, any die shrink / low-k used on the lower-end parts will only serve to increase performance. Since I don't think ATI will be releasing mid-to-low R4xx parts for a while yet, it seems to me that the NV4x parts for these markets will be left almost unchallenged, at least for a little while.
 
DaveBaumann said:
The Baron said:
Dave, I recall an NVIDIA press release about its partnership with TSMC et al. that stated 110nm at TSMC would not be ready for general-purpose chipsets until next year.

I don't think you have recalled that correctly.
http://www.beyond3d.com/forum/viewtopic.php?t=10442&highlight=tsmc

TSMC began 0.11 micron high-performance technology development in 2002 and product-qualified the process in December of 2003. Design rules, design guidelines, SPICE and SRAM models have been developed and third-party compilers are expected to be available in March. Yields have already reached production-worthy levels and the low-voltage version has already ramped into volume production. The 0.11 micron general-purpose technology is expected to enter risk production in the first quarter of next year.
It could just be me not knowing the exact meanings of "general-purpose technology" and "risk production", but it sounds like 0.11 micron isn't ready yet.
 