So who thinks their predictions were right about the NV40?

Ailuros, I just think it is a bit premature to say that using IBM as a resource was not a good move. From what I have read on the forums, the NV36 fabbed by IBM was the smoothest run they have ever had. However, one way or another, NVDA will be forced to seek more efficient manufacturing processes. I have heard that SOI is one possibility along these lines.
 
The biggest problem is getting more bandwidth. To really show non-synthetic performance increases, a 600MHz R420 is gonna need more than 1.2GHz RAM. Both NVIDIA and ATI are gonna be fighting over those few scraps of 800MHz RAM :)
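To put rough numbers on that (back-of-the-envelope only; the 256-bit bus and the clocks below are my assumptions, not confirmed specs):

```python
# Peak memory bandwidth from bus width and effective data rate.
# All figures are illustrative assumptions, not confirmed R420/NV40 specs.
def bandwidth_gb_s(bus_width_bits: int, effective_mhz: float) -> float:
    # bytes per transfer * transfers per second, reported in GB/s
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gb_s(256, 1200))  # 1.2GHz effective -> 38.4 GB/s
print(bandwidth_gb_s(256, 1600))  # 800MHz clock, 1.6GHz effective -> 51.2 GB/s
```

That extra ~13 GB/s is exactly the gap they'll be fighting over.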
 
Samsung (for NVDA) and Micron (for ATI) both have 800MHz+ samples ready now, correct? I read somewhere that Samsung initially had some problems, but is now getting much better yields on highly clocked RAM.
 
jimmyjames123 said:
Ailuros, I just think it is a bit premature to say that using IBM as a resource was not a good move. From what I have read on the forums, the NV36 fabbed by IBM was the smoothest run they have ever had. However, one way or another, NVDA will be forced to seek more efficient manufacturing processes. I have heard that SOI is one possibility along these lines.

It's not premature at all; I didn't just pull it out of my head.

In its press conferences, NVIDIA went from enthusiastic about IBM to distinctly lukewarm; not only that, I'm aware of a couple of moves from NV that I can't really mention which support that change of "feelings".
 
DemoCoder said:
The biggest problem is getting more bandwidth. To really show non-synthetic performance increases, a 600MHz R420 is gonna need more than 1.2GHz RAM. Both NVIDIA and ATI are gonna be fighting over those few scraps of 800MHz RAM :)

Copy that.
 
Well, fortunately they still do seem to have a working relationship with both IBM and TSMC, assuming that insider info does not say otherwise :D

Can't wait to see how the NV40 vs R420 battle turns out. Should be a good one. I expect the victories to go both ways, depending on 1) what games are used, 2) what AA/AF settings are used, and of course 3) what resolutions are used.

What may actually be even more interesting, and of more relevance to a greater number of people, is how the $299-$399 battle turns out. The only thing we can really do is wait and see ;)
 
Yes, of course they do. NVIDIA is still one of TSMC's biggest customers. Switching foundries for finished chips that are in mass production is out of the question.
 
My point is that I expected it to be a lot faster.

IQ and performance do not go hand in hand for me: regardless of the performance of my R300, I always play at 1024x768 with 4xAA and 16xAF. Even when the games I play run at 200fps, I do not bump up the res.

I hate having my resolution too high or too low. 1024x768 is my sweet spot.
It makes no difference to me whether I run games at 10fps, 60fps, 100fps, or 300fps; the resolution stays.

jimmyjames123 said:
What reasoning did you use to come up with "I expected it to be at least 30% better"? That makes no sense at all. Percentage performance differences vary depending on what game is tested and what resolutions and AA/AF settings are used, period. In some games and at some resolutions, the NV40 was 2-3 times faster than what was previously regarded as the fastest chip on the market, using raw drivers and most likely conservative clock speeds too! Things can only go up from here as game developers begin to spend time coding optimally for the NV40, and as the driver team gets a chance to smooth out the drivers and work hand in hand with the game developers.

You also need to realize that speed and IQ go somewhat hand in hand. The NV40 is fast enough to keep framerates extremely high while using higher resolution and/or higher AA/AF than current-generation hardware. That in itself implies better image quality per given frame rate.


On another note:
I am going to skip this generation of cards unless I can afford to throw money away.

nVidia has made some nice improvements, by the looks of things, but I would prefer to see how the commercially available cards perform before I make my final judgement.

So far my judgement has been based on these previews, which don't exactly show the full picture of how the cards are going to perform and look on the average gamer's system.

I'm just more cautious these days; nVidia's cheating fiasco has put me off their products a little.

So I guess I'm going to have to wait and see.

And on another note:
http://www.gamershell.com/news_AnFPSin96kbSurelynot.shtml

Can Dave bench this 96KB FPS on his NV40?
It's amazingly shader-intensive by the looks of it. :D
 
OK, you are being very vague again. Higher performance in what game? What kind of fps numbers are you looking for? What would be your idea of good "expected" performance from the NV40?

By your definition, no new card will ever be fast enough for you, because whether it is 60 or 100 or 300 fps, it doesn't matter to you.

You have to think about NVDA's goals with the current 6800 Ultra. Their goal is to provide users with enough processing power to crank up both resolution and AA/AF levels (up to 4xAA/16xAF, realistically) in virtually all of their games without suffering from low framerates that make the game unplayable. That is a totally different philosophy from trying to maximize image quality at one fixed resolution, like 1024x768.

IQ and speed do go hand in hand. Having glorious IQ will get you nowhere if you do not have the framerates to keep the game playable. That point is hardly debatable.
 
I'm not surprised. 'Course, something tells me I'll have some down and dirty time with an MSI 6800 Ultra in a few days :). Hope my brand new 450W PSU can handle it :)
 
DemoCoder said:
The biggest problem is getting more bandwidth. To really show non-synthetic performance increases, a 600MHz R420 is gonna need more than 1.2GHz RAM. Both NVIDIA and ATI are gonna be fighting over those few scraps of 800MHz RAM :)
I'm thinking that ATI will be in pretty decent shape. I wasn't overly impressed with the 8500's bandwidth efficiency, especially compared to GF3/GF4, but the 9500 PRO showed just how well the R300 architecture can do with half the bandwidth. Yes, it was slower than the 9700, but not by a whopping amount.

I think 9500 PRO performance times 4 (double the pipes, double the core/mem frequency, double the bus width, similar pipelines) is a good way to make preliminary estimates of R420 performance. Looking at ShaderMark 2.0 scores or non-CPU-limited gaming scores for the 9500 PRO, I think they'll be trading victories.
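As a quick sketch of that arithmetic (the 9500 PRO figures are from memory; the R420 numbers are purely my hypothetical doubling, not leaked specs):

```python
# "9500 PRO times 4": double the pipes, double the clocks, double the bus width.
# The R420 figures are hypothetical, chosen only to illustrate the scaling.
r9500pro = {"pipes": 8, "core_mhz": 275, "bus_bits": 128, "mem_mhz": 270}
r420_guess = {"pipes": 16, "core_mhz": 550, "bus_bits": 256, "mem_mhz": 540}

fill_scale = (r420_guess["pipes"] * r420_guess["core_mhz"]) / (r9500pro["pipes"] * r9500pro["core_mhz"])
bw_scale = (r420_guess["bus_bits"] * r420_guess["mem_mhz"]) / (r9500pro["bus_bits"] * r9500pro["mem_mhz"])
print(fill_scale, bw_scale)  # 4.0 4.0 -> roughly 4x a 9500 PRO on both fronts
```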

Of course, ATI will probably have to sell their card for less since it doesn't have PS/VS 3.0, and I would rather buy the NV40 for that reason since I do some 3D coding now and then. The R420 should also cost less to make, since it should consume less power, need a smaller HSF, and have a smaller die.

In the end, though, I won't buy either since I'm quite content with my AIW9700P right now. When we see 32 unified shading pipes (possibly superscalar) in a card, then it'll be time for me to splurge :)
 
jimmyjames123 said:
OK, you are being very vague again. Higher performance in what game? What kind of fps numbers are you looking for? What would be your idea of good "expected" performance from the NV40?

By your definition, no new card will ever be fast enough for you, because whether it is 60 or 100 or 300 fps, it doesn't matter to you.

IQ and speed do go hand in hand. That point is hardly debatable.

Well, any new card is always fast enough for previous and current generation games, but never fast enough for future generation games.

Like you said, I should wait a while before giving my final judgement, so I may have jumped the gun a little.

The only example you have given me of IQ and speed going hand in hand is that you can crank up the resolution.

This is hardly what I would call an IQ increase; the reason I use 1024x768 is that it looks like the best resolution on my monitor, much better than 1280x960 or 800x600.
 
K.I.L.E.R said:
IQ and performance do not go hand in hand for me: regardless of the performance of my R300, I always play at 1024x768 with 4xAA and 16xAF. Even when the games I play run at 200fps, I do not bump up the res.
Then I guess that means you should buy a new monitor before buying a new video card.
 
Chalnoth said:
K.I.L.E.R said:
IQ and performance do not go hand in hand for me: regardless of the performance of my R300, I always play at 1024x768 with 4xAA and 16xAF. Even when the games I play run at 200fps, I do not bump up the res.
Then I guess that means you should buy a new monitor before buying a new video card.

Not going to happen.
I like my monitor and I sure as hell am not going to upgrade it anytime soon.
My monitor isn't that old and is pretty good. I get a nice, sharp, vibrant image out of it.
 
Well, any new card is always fast enough for previous and current generation games, but never fast enough for future generation games.

I guess I'm not sure what you are trying to say here. If you read all the reviews of the NV40, you would have a pretty good idea that the NV40 is about as futureproof a card as we have ever seen.

The only example you have given me of IQ and speed going hand in hand is that you can crank up the resolution.

Not true. You can also bump up AA and AF levels on a more powerful card, keeping resolution constant, as long as framerates are kept at levels that are playable. Having the ability to use AA and AF in some of the very demanding new games, even at 1024x768 resolution, gives an instant large advantage in IQ.
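To make that concrete with a toy calculation (it naively counts every AA sample as a fully shaded pixel, which overstates the cost of multisampling, but the point stands):

```python
# Raw samples per second a card must fill at a given resolution, AA level, and
# target framerate. Purely illustrative; ignores AF, overdraw, and bandwidth.
def msamples_per_sec(width: int, height: int, aa: int, fps: int) -> float:
    return width * height * aa * fps / 1e6

print(msamples_per_sec(1024, 768, 4, 60))   # ~189 Msamples/s at 1024x768 with 4xAA
print(msamples_per_sec(1600, 1200, 1, 60))  # ~115 Msamples/s at 1600x1200 with no AA
```

In other words, 4xAA at 1024x768 can demand more raw fill than 1600x1200 without AA, so a faster card buys real IQ even at a fixed resolution.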
 
Then it would be pointless to upgrade your video card (unless you're a programmer) until games are released that show significant differences while using PS/VS 3.0 and the associated features (e.g. FP blending/filtering).
 
jimmyjames123 said:
It's not like NVDA is going to sit still with a 400MHz core clock, especially when we all can see how efficient the underlying NV40 architecture is, and how much performance can potentially increase with core clock speed.
Well, of course not. We can be pretty much certain that the NV45 is on the way for a fall release, and with a rumored die shrink to 0.11 micron with low-k, 600MHz for the NV45 seems conservative.
 
Chalnoth said:
Then it would be pointless to upgrade your video card (unless you're a programmer) until games are released that show significant differences while using PS/VS 3.0 and the associated features (e.g. FP blending/filtering).

I know. I just wanted something that would smash my R300's IQ.
I am a programmer.

I have mangled together some crappy OGL demos, but I didn't bother going further for now.

I have other responsibilities.

jimmyjames123 said:
I guess I'm not sure what you are trying to say here. If you read all the reviews of the NV40, you would have a pretty good idea that the NV40 is about as futureproof a card as we have ever seen.

I haven't read every review out there, but I have read three to five of them; I can't remember exactly how many.
It may look like the most futureproof card now, but that may change in a few months.

I'm not saying it will change, just that it may change. I'm trying to keep an open mind.

jimmyjames123 said:
Not true. You can also bump up AA and AF levels on a more powerful card, keeping resolution constant, as long as framerates are kept at levels that are playable. Having the ability to use AA and AF in some of the very demanding new games, even at 1024x768 resolution, gives an instant large advantage in IQ.

Well, I will keep using 1024x768 with 4xAA and 16xAF regardless of my framerate. These newer games are no slower on my system than games from last year; e.g., BF1942 and BF:V run the same with maximum settings.

Then again, I am quite CPU-limited, so AA and AF are mostly free anyway until I upgrade my CPU.
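Here is a toy model of why that is (frame time is roughly the slower of the CPU and the GPU; the millisecond figures are made up for illustration):

```python
# While the CPU takes longer per frame than the GPU, extra GPU work (AA/AF)
# is hidden. The numbers are made up purely for illustration.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000 / max(cpu_ms, gpu_ms)

print(fps(20, 8))   # 50 fps, GPU mostly idle
print(fps(20, 16))  # still 50 fps: the 4xAA/16xAF cost was "free"
print(fps(20, 25))  # 40 fps: now GPU-limited, so AA/AF finally costs frames
```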
 
Chalnoth said:
jimmyjames123 said:
It's not like NVDA is going to sit still with a 400MHz core clock, especially when we all can see how efficient the underlying NV40 architecture is, and how much performance can potentially increase with core clock speed.
Well, of course not. We can be pretty much certain that the NV45 is on the way for a fall release, and with a rumored die shrink to 0.11 micron with low-k, 600MHz for the NV45 seems conservative.

Is the NV45 moving to TSMC? I haven't heard that IBM has worked out their low-k issues yet.

I think the fall refresh parts will need to do more about memory speed than core speed, in any event.
 