NVIDIA 60.72 Fiasco

Joe DeFuria said:
The R420 really would be more accurately called the R380 or something similar.

Whatever makes you happy. :rolleyes: The NV40 should actually be called the NV30... because it would be best for everyone concerned if the NV3x never existed at all and was erased from memory.

Nahhh, if the NV30 hadn't existed, I wouldn't have been able to watch the people at nvnews sweat for a year and a half.

Hmm, kind of like they're sweating now!
 
Joe DeFuria said:
Whatever makes you happy. :rolleyes: The NV40 should actually be called the NV30... because it would be best for everyone concerned if the NV3x never existed at all and was erased from memory.

Damn, beat me to it. But, yes, we'd have more games with 2.0 support this year were it not for NV3x hardware.
 
Joe DeFuria said:
Whatever makes you happy. :rolleyes: The NV40 should actually be called the NV30... because it would be best for everyone concerned if the NV3x never existed at all and was erased from memory.

LOL :LOL:
 
Chalnoth said:
Kombatant said:
The architecture is not THAT new to warrant immature drivers. And besides, I believe our topic of discussion here was questionable "optimizations" and not driver immaturity.
Not yet available isn't new? Since when?

Since the time nVidia decided to improve the NV3x architecture and keep it as the basis for its NV4x one, and not redesign the whole chip, I believe. It's the exact same thing Geforce4 was to Geforce3, if that gives you a better understanding of my argument.
 
Kombatant said:
Chalnoth said:
Kombatant said:
The architecture is not THAT new to warrant immature drivers. And besides, I believe our topic of discussion here was questionable "optimizations" and not driver immaturity.
Not yet available isn't new? Since when?

Since the time nVidia decided to improve the NV3x architecture and keep it as the basis for its NV4x one, and not redesign the whole chip, I believe. It's the exact same thing Geforce4 was to Geforce3, if that gives you a better understanding of my argument.

Hehe. I love how these forum arguments try to appear intellectual, but they all boil down to little boys (and girls :oops: ) trying to scratch each other's eyes out.

Anyway, Geforce3 ---> Geforce4 ==== NV30 -----> NV40 :?
 
Some of you people really need to get a life.

I mean, nVidia is supposed to have AWESOME drivers, right? So why is it that all of a sudden, all of these issues pop up:

Beta drivers anyone?
Next time Microsoft releases a Beta of their OS, are you going to jump down their throats if something doesn't work correctly on an unreleased version of their software?

So what happens in a month when the final drivers come out with the problem gone? Let me guess: you don't say anything.
 
Maintank said:
Some of you people really need to get a life.

I mean, nVidia is supposed to have AWESOME drivers, right? So why is it that all of a sudden, all of these issues pop up:

Beta drivers anyone?
Next time Microsoft releases a Beta of their OS, are you going to jump down their throats if something doesn't work correctly on an unreleased version of their software?

So what happens in a month when the final drivers come out with the problem gone? Let me guess: you don't say anything.

mmm, yeah, beta drivers are supposed to get WORSE with each edition!
 
Snarfy said:
mmm, yeah, beta drivers are supposed to get WORSE with each edition!

But why do you care so much? Do you supervise the driver team? Does their poor work affect your Christmas bonus? :LOL: :LOL:

I would be pissed if the shipping drivers sucked ass and I owned their hardware, but beating on the betas? ... Come on. Nvidia has done a lot of other, more deserving things for you to bitch about, haven't they?

And how many of you people with a bazillion posts actually play games anyway :p
 
Kombatant said:
Chalnoth said:
Kombatant said:
The architecture is not THAT new to warrant immature drivers. And besides, I believe our topic of discussion here was questionable "optimizations" and not driver immaturity.
Not yet available isn't new? Since when?

Since the time nVidia decided to improve the NV3x architecture and keep it as the basis for its NV4x one, and not redesign the whole chip, I believe. It's the exact same thing Geforce4 was to Geforce3, if that gives you a better understanding of my argument.

Just curious, what particular part of the NV3x architecture served as the basis of NV4x?
 
Well, from the Dave Orton interview one can conclude that ATI intends to milk the R300 architecture till DirectX Next......

Sabastian wrote:
I gathered that instead of this ping-pong of starting from scratch at separate labs, they will work together. I think basically the R300 core will be the basis of all their designs until DX Next arrives. (Quite speculative.) I figured that the comment below articulates that, with the exclusion of the idea that a new API would drive a new design.

DaveBaumann wrote:
We have a winner!

http://www.beyond3d.com/forum/viewtopic.php?t=12137&postdays=0&postorder=asc&start=20
 
mmm, yeah, beta drivers are supposed to get WORSE with each edition!

And what if they do? How exactly does it hurt you if the final revision works fine?
 
stevem said:
Chalnoth said:
It wouldn't be any different at all to simply add pipelines. They're completely parallel.

The difference in the drivers will come down to shader execution.
I'm not convinced it's that simple. The complexity of keeping all the pipelines as busy as possible & maximizing IPC seems challenging. Scheduling will be a key task. The driver/compiler teams are going to be very busy...
No, because the quads are independent. The only remotely challenging aspect would be managing memory bandwidth accesses.
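(As an aside, here is a toy sketch of what "completely parallel" means in this exchange -- purely illustrative Python with made-up names, nothing like actual driver or silicon code. The assumption being modeled is Chalnoth's: each 2x2 quad of fragments carries all the data its shader needs, so quad pipelines behave like workers that never synchronize with one another.)

```python
# Toy model of "the quads are completely parallel" (hypothetical,
# illustrative only -- not real driver or hardware code).
from concurrent.futures import ThreadPoolExecutor

def shade_quad(quad):
    # Stand-in "pixel shader": every fragment depends only on its own
    # inputs, never on fragments that live in another quad.
    return [(r * 0.5, g * 0.5, b * 0.5) for (r, g, b) in quad]

# Four fake quads, each holding four RGB fragments.
quads = [[(q / 4.0, x / 4.0, 1.0) for x in range(4)] for q in range(4)]

# Four workers stand in for four quad pipelines. Because no quad reads
# another quad's data, no locks or ordering are needed between them --
# adding pipelines really is just adding workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    shaded = list(pool.map(shade_quad, quads))

print(shaded[0])  # e.g. [(0.0, 0.0, 0.5), (0.0, 0.125, 0.5), ...]
```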
 
Maintank said:
mmm, yeah, beta drivers are supposed to get WORSE with each edition!

And what if they do? How exactly does it hurt you if the final revision works fine?

It's not a problem at all.... unless they send those drivers to reviewers and it ends up making the card bench faster than it does with the release drivers.
 
Chalnoth said:
No, because the quads are independent. The only remotely challenging aspect would be managing memory bandwidth accesses.
Surprisingly, we have divergent views... Quad independence is a given with these architectures. How useful/likely is it to have one (or more) quads underutilized? "The only remotely challenging aspect" is memory management? I hardly think so. Memory management is largely a HW issue, with both parts featuring fairly comprehensive memory controllers, crossbar architectures, etc. If you mean optimizing the fragment pipeline, that was my point. Could be an interesting Q for OGLGuy, etc., if they feel inclined.
 
trinibwoy said:
...
And how many of you people with a bazillion posts actually play games anyway :p

Just a suggestion, but you might try criticizing the posts people write instead of constantly criticizing the people who write the posts--especially when all you can do is ask questions that impugn the motivations of those people simply because you cannot understand what's written in their posts (like the silly question above, for instance).

Basically, the majority of your responses remind me of the proverbial hotel bellhop listening to Einstein present a lecture on relativity who hears the whole thing but walks away snickering and thinking "What a clod!" because the only thing he registered was that Einstein's shoelaces were undone.

Really, constantly complaining that none of "you people" actually understand any of the topics they've addressed in the posts they've written simply underscores the likelihood that it isn't they who are confused--but it's you, instead. Just a thought you might like to turn over...(You also might like to ask yourself how many zeros are in the number "bazillion" so that you'd have a more precise idea of the number of posts you're thinking about.) Heh...;) I must really be bored...:D
 
stevem said:
Quad independence is a given with these architectures. How useful/likely is it to have one (or more) quads underutilized?
No more likely than with fewer quads running at once. This is nothing like CPUs, which frequently aren't able to run parallel units because they have to deal with data dependency. Even with branching, there is still no data dependency between quads, so there is no reason for there to be any issues keeping all quad pipelines full.

A quad pipeline will be in use as long as there is still data being fed from the vertex units. This obviously requires more cache, but that has nothing to do with drivers.

"The only remotely challenging aspect" is memory management? I hardly think so. Memory management is largely a HW issue with both parts
That's the point. It's not a driver issue. With more quads you have the problem of making sure each pipeline has all of the data it needs. This is a memory management issue.
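(To illustrate this exchange in the same toy Python terms as before -- again hypothetical, with made-up names, not real driver code: if quads never depend on each other, keeping every pipeline busy reduces to keeping the input stream full, a data-feed problem rather than a compiler-scheduling one.)

```python
# Minimal sketch of the feed model described above (hypothetical):
# quad pipelines stay busy for as long as the front end keeps
# producing quads, and never wait on one another's results.
from queue import Queue
from threading import Thread

def quad_pipeline(pipe_id, work, results):
    while True:
        quad = work.get()
        if quad is None:                  # sentinel: front end ran dry
            break
        results.append((pipe_id, quad))   # "shade" the quad

work, results = Queue(), []
pipelines = [Thread(target=quad_pipeline, args=(i, work, results))
             for i in range(4)]
for p in pipelines:
    p.start()

for quad in range(16):                    # rasterizer emitting quads
    work.put(quad)
for _ in pipelines:
    work.put(None)                        # one sentinel per pipeline

for p in pipelines:
    p.join()

# All 16 quads get shaded; which pipeline took which quad is purely a
# matter of who was free, not of any inter-quad dependency.
print(len(results), "quads shaded")
```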
 
It's not a problem at all.... unless they send those drivers to reviewers and it ends up making the card bench faster than it does with the release drivers.


I think the reviewers are smart enough to let you know of any problems with "beta" drivers. When the first wave of cards comes out, the official drivers will come with them, and there will be a whole slew of reviews.

If anybody buys a card based on beta drivers, they get what they deserve. I wouldn't get your panties in a bunch until the retail version comes out. If there are the same bugs as before, you won't have much to worry about, because people won't be buying the card anyway. Who wants to drop 500 bucks on a card that doesn't work?
 
Maintank said:
Some of you people really need to get a life.

I mean, nVidia is supposed to have AWESOME drivers, right? So why is it that all of a sudden, all of these issues pop up:

Beta drivers anyone?
Next time Microsoft releases a Beta of their OS, are you going to jump down their throats if something doesn't work correctly on an unreleased version of their software?

So what happens in a month when the final drivers come out with the problem gone? Let me guess: you don't say anything.
Let me guess what happens if they come out and shit is still broken:
you and the other nVidia apologists blame it on immature drivers and tell us to wait some more.
I mean, come on.
 
Let me guess what happens if they come out and shit is still broken:
you and the other nVidia apologists blame it on immature drivers and tell us to wait some more.
I mean, come on.


You know, I came to this board for some good old honest, mature conversation about GPU technologies. I have been here two weeks and offer my opinions on things, and every single time somebody whose political agenda seems to coincide with ATI doesn't like what I have to say, I'm an Nvidiot.

I really thought this msgboard would be different, but it appears this place is also overrun with fanbois from both sides, although decidedly in ATI's favor. Such a shame. But please do show me where I am really siding with one manufacturer. I will call it like I see it, and if you don't like it, that hardly qualifies me as somebody who is on Nvidia's side. And if you think me pointing out that these are beta drivers qualifies me as a fanboi, then maybe you need to sit back and think about just how close you really feel to a piece of hardware.

If the retail drivers come out with the same problems, then we all need to get to the bottom of it. But chances are they won't have the issues, and today's little bickerfest will mean nothing except to quell the fears of a few select fanbois.
 