demalion said:
Well, I'm in a fair bit of amazement that such rumors are the majority of what they've managed to deliver successfully, and that feeling has served to numb any annoyance, I guess.
"...what they've managed to deliver..."Which might be...? It's Feb 8th, and as far as I know nVidia is still shipping the same product line it was shipping six months ago. Haven't shipped anything of note since. They were going to ship a product they declared competitive with ATI's higher-end R300 products, but as we now see that has been scrapped. Supposedly, in a few weeks they'll be shipping--or rather their OEMs will be shipping--a 9500Pro-level competitor. But then that doesn't qualify as "...what they've managed to deliver successfully.."(since they haven't delivered it yet), does it? So I'm a little puzzled by your remarks here.
Or do you mean you are sufficiently impressed with their past achievements to think that has a significant impact on what their future achievements are likely to be?
Eh? No, they do have a chip that competes fairly well. It just isn't feasible to release it as a product. Also, the non-Ultra still competes, just not very successfully from the standpoint of those who picture nVidia as the performance leader. The Ultra has served its purpose of providing benchmarks that can be used to show the "GeForce" at an advantage over the "Radeon."
Nope, the non-Ultra doesn't compete on the high end. And the chip in the non-Ultra is the same one that would have been in the Ultra, only not overvolted, overclocked, and dustbusted. It's gratifying to see that you are not among those unfortunate few who see nVidia as any kind of a "performance leader." Reading you say that puts some of your other remarks in context (some of them, *chuckle*.)
Hmm...well, yes there are, but why are you telling me? I'm the guy who was lambasting people for making assumptions after E3 that it was preposterous that the R300 couldn't be faster than the nv30.
I was responding to your comments which I interpreted as a linkage between nv35 and nv30, which comments seemed to state the failure of nv30 on the high end was some sort of positive message about the state of nv35 development. If you tell me that was not your intent I withdraw the criticism.
Do you think any of this is news? Well, I'm not sure what you mean by "too many heatsinks" but I'm not particularly curious...
This isn't the nvnews forums Walt, nor Rage3D; you don't have to keep pointing out things like this when no one is contesting them (at least not at such length). All I was commenting on (read the text again) was specifically that I don't see any reason at all to assume the nv35 is necessarily delayed because the nv30 was so late...hence terms like "lends validity to the rumors".
Is "nvnews/Rage3D" somehow pertinent here? I don't recall mentioning either myself.
I took your remarks to mean that you felt that there was only one reason to cancel the nv30 Ultra--and that was the nv35 sitting in the wings. That's how I interpreted your original remarks. I was simply pointing out that there were many reasons to have cancelled the Ultra aside from whether or not anyone has expended one man-hour of work on "nv35" at this time--of which there is no proof whatever, that's all. If you're saying you never meant to imply that the Ultra was cancelled because the nv35 was ready to take its place, I'll accept that and again withdraw the criticism.
Specifically, I think this leaves room for the nv35 to come out before fall (and presumably the R400...I don't think ATi has a great deal of reason to hurry the launch of that even if they could...I think the R350 is likely to compete well enough with the nv35), and opportunity to re-associate nVidia and the GeForce, for whatever amount of time, with the concept of "performance leadership".
I think ATI is going to be smart on this and not "pace itself" according to some imagined need to stay "equivalent" with nVidia. I think ATI will strive to put as much of a performance distance between itself and nVidia as it can manage, as quickly as it can manage it, without overlapping its product spread too much. It was nVidia after all which started but couldn't maintain its so-called "six-month product cycle," which was one of the things that gave 3dfx such a bad time (but 3dfx's problems were mostly self-inflicted, sad to say.) Therefore, I think it will be good for ATI to use nVidia's tactics against it to the extent it is able to do so. ATI is not trying to maintain any sort of parity--they are trying to win back marketshare and mindshare, and the only way to do that is to get those things the same way nVidia originally took them from companies like 3dfx and ATI, and that is to set a pace of development and production such that it leaves your competitors breathless and always in a lag to catch up. The signs I see at ATI seem to point in this direction.
This does leave technical issues to be worked out, and I don't have the confidence in nVidia engineers that I would with ATI engineers at this point, but even just adding a 256-bit bus would help the GF FX catch up quite a bit, even before considering the other ideas the engineers may have in mind.
Of course, but this underscores the need for ATI to press ahead. They will have been shipping a 256-bit bus product about one year prior to nVidia doing it, and by late this summer they will be preparing to ship, if not shipping, their first .13 micron product. Which gives them several material advantages over nVidia--first, that TSMC will have a much better .13 micron process for them to use, and secondly that ATI's R400 will not be a brand new architecture, but based heavily on R300--whereas nVidia's first nv30 at .13 microns was a brand new architecture. I'm certain that R400 will bring some new things to the table, but I don't see them dropping such a terrific architecture anytime soon, or until they have something better to replace it. So I can see that as nVidia catches up ATI will simultaneously keep moving farther ahead. That's the idea, anyway--to keep nVidia in perpetual catchup mode, if possible. (At any rate, that's the way I would play it.)
Repeating myself...given the prior hints of the nv35 being the focus of intensive "debugging", it seems likely, in my opinion, that this info about cancelling the 5800 Ultra parts strengthens the likelihood of rumors of a May/June launch schedule. If you disagree with this, a brief reply like that at the end could have sufficed....
I only disagree with the inference that the cancellation of the Ultra has anything whatever to do with "nv35"--about which is known...nothing. If I could always read the minds of the people whose posts I respond to perhaps I could do better--but then so could they if they could read mine...
Nowhere do I indicate that I disagree that the 5800 Ultra is a flawed part, and I've mentioned the flaws prior. I don't mention them again because they've been mentioned quite a few times already....
Granted--again, what I disagreed with was any linkage between the proximity of "nv35" and the cancellation of the nv30 Ultra product. That's what I meant about not wanting to see any more rumor mongering--spin, if you will--I'm kind of tired of spin. Now, if nVidia makes a statement to the effect that it cancelled the Ultra because it has a much better thing ahead in the nv35, which it plans to ship in the June/July time frame--then that moves the topic out of the rumor category. Somehow, I don't think nVidia is going to be making any more announcements about the date it will be shipping new chips, though...
Not until it knows it can make such announcements with confidence. But especially here what I believe is that as of a couple of weeks ago nVidia had every intention of shipping the Ultra regardless of how impractical it was. Something happened in the last couple of weeks to restore their sanity over there...I'm glad of it.
Yes, yes....similar outlooks have been well established. For my part, that is why I was using terms like "sane"...the Ultra just strikes me as a computer OEM dud.
...more reasons than I thought were discussed adequately, and I don't see my post contesting them.
Again, it was the linkage that I felt you were making between the hypothetical "nv35" and the cancellation of the nv30 Ultra that put me on that path...
Now that I understand you weren't linking the "nv35" to the cancellation of the nv30 Ultra, I suppose I can cheerfully withdraw the criticism, as I agree that the two have nothing to do with each other.
I think you make a good point, and I tend to agree. See above with my later post about the memory clock speed.
Yes, I saw where you'd said that in another post--unfortunately after I'd already posted...
I think we agree that adding a little bandwidth to nv30 non-ultra isn't going to help it, but would simply drive up the cost.
Well, I've discussed this before...clocking the RAM near the same frequency as the core with a 128-bit bus is more limiting than doing so with a 256-bit bus. With each card having roughly the same fillrate, this indicates a situation where the GF FX architecture is much more likely to "choke," as I termed it, and I think it is more likely to get greater returns from increasing RAM clock frequency (assuming there are no issues with such a memory clock disparity between core and RAM...I assume nVidia has their interface well in order).
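The bus-width disparity being discussed can be sketched with a quick back-of-the-envelope bandwidth calculation. (The clock figures below are the commonly cited launch specs for the two cards at the time, used only for illustration; they are my assumption, not something stated in this thread.)

```python
def bandwidth_gb_s(bus_bits, mem_clock_mhz, transfers_per_clock=2):
    """Peak memory bandwidth in GB/s: bus width in bytes times the
    effective transfer rate (DDR moves data twice per clock)."""
    return (bus_bits / 8) * mem_clock_mhz * transfers_per_clock * 1e6 / 1e9

# GeForce FX 5800 Ultra: 128-bit bus, DDR-II at 500 MHz -> 16.0 GB/s
gffx = bandwidth_gb_s(128, 500)

# Radeon 9700 Pro: 256-bit bus, DDR at 310 MHz -> 19.84 GB/s
r300 = bandwidth_gb_s(256, 310)
```

Even with the RAM clocked far higher, the 128-bit bus leaves the GF FX with less peak bandwidth than the R300, which is why a higher RAM clock (or a wider bus) would plausibly yield outsized returns there.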
Well, I'd call that simply a limitation of a 128-bit bus, but that's neither here nor there...Heh-Heh...I don't think at this point that I'd (personally) make any assumptions about any nVidia architecture and its performance beyond that associated with the GF4 series of chips...
But I agree the returns in performance are likely not to be deemed worth it for the increased cost, though I don't have any definite idea of the cost difference.
nVidia does, and as they aren't doing it, I guess we're right...
I'm also not convinced that the RAM on the GF FX is best considered to be "DDR-II" in regards to latencies. But that's another discussion (no, really... we've had that discussion in another thread...).
I don't think it's GDDR-III, though, either, which ATI has stated it wants to use precisely because of some of the latency issues involved with DDR-II. But it may well have better latencies--though neither of us knows, of course.
Now this is a brief statement of disagreement. I still don't know why you felt the majority of the first half of your reply was necessary.
One of my constant failings has been verbosity, since my high school days (quite a few moons ago.) *chuckle* I'm not as bad as I used to be, though.....
(Believe it or not.)
To reply briefly in turn, I also don't think the nv35 will successfully compete with the R400, and I think nVidia has been focusing on getting the nv35 ready as soon as possible for quite a while (since the 9700 launch at least). I think it is the best glass of lemonade they can make from the situation, and I think they are preparing it as fast as they can.
I'm not going to try and dissuade you at all about your belief on when they are going to deliver it (because I don't have any strong opinion that it is wrong), but I do take issue with your idea of "nv35 can't arrive soon because the nv30 just arrived."
No, but I just think there are deeper problems with nv30 than just bandwidth. In fact, it may well be some of those problems which have caused nVidia to can the Ultra--we won't ever really know, probably. (Maybe some of this will come to light when the 5800's actually start to ship.) That's why I think it unlikely in the extreme that the cancellation of nv30 Ultra and "nv35" have any sort of linkage whatever, apart from the basic nVidia in-house nomenclature that has "nv35" following "nv30" at some future date.
Now that I understand your position much better I would ask that you forgive my stridency if it appeared that way--I think I've just had it with the rumor mill and the incessant propaganda and all of the rest...! Not that you were intending to engage in any of it--but I suppose I was more inclined to see it that way than I might have been at another time. Really, I suppose there is little we might disagree on here.
And I'm really tired...
Good night!....