Are you ready? (NV30 256-bit bus?)

Vince said:
Only after talking with Dave did I state that I can see why an advanced development group at Nvidia would rely on DDR II rather than a "one time 2X bang", given the cost a 256-bit bus carries, when DDR or DDR II provides enough bandwidth.

That "one time 2X bang" works just as well with DDR II memory as it does with DDR I.
Um, or could Nvidia have just seen that DDR II is the future of memory? As their history shows, they ALWAYS jump on the bleeding-edge bandwagon: if not with DDR, then by pushing lithography to the extreme (0.22, 0.18, 0.15, and now 0.13um).

In that case, I guess they didn't jump on the "256-bit memory bus is the future of memory buses" bleeding-edge bandwagon.
 
Even nVidia believes 256-bit is the future, so why in hell would anyone nit-pick ATI for offering it now?

David Kirk:

We'll move to 256-bit when we feel that the cost and performance balance is right.

Maybe ATI felt the cost and performance balance was right for them with the 9700.
 
SirPauly said:
Even nVidia believes 256-bit is the future, so why in hell would anyone nit-pick ATI for offering it now?

David Kirk:

We'll move to 256-bit when we feel that the cost and performance balance is right.

Maybe ATI felt the cost and performance balance was right for them with the 9700.

nVidia seems pretty rattled by the R300. Kirk's comments in the recent interviews, and especially now on the nVidia site, are direct shots aimed at ATI and the 9700--the most direct shots I can remember the company making, even when talking about 3dfx. I especially thought his "128-bit precision is great" but "96-bit precision is a half-measure" comments were pretty funny...;) *chuckle* (As if an end user playing a 3D game could ever tell the difference between them.)

And I also thought it was funny to see him squirm on the memory bandwidth hook--heh, heh. It looks to me like nVidia would have to run 1.24GHz DDR II just to match ATI's 620MHz DDR bandwidth, because ATI's bus is 2x the width. I have a hard time seeing nv30s produced on any kind of scale with 1GHz DDR II anytime soon, and even more than that, I can't imagine right now what that would cost...! How attractive is a $599 nv30 going to be, especially if it isn't seen as compelling next to the 9700 Pro?

I just have a feeling that nVidia's decision to latch on to the latest chip manufacturing processes and the latest DDR II RAM technology is going to cost it a bundle, in light of what ATI has done with existing technologies in the current R300 product line.
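For anyone who wants to check that arithmetic, here's the back-of-the-envelope version. It assumes the 9700 Pro's 310MHz DDR (620MHz effective) and the rumored 500MHz DDR II (1GHz effective) for nv30--the NV30 figure is speculation, not a spec:

```python
# Back-of-the-envelope check of the bandwidth claims above.
# Assumed figures: R300 (9700 Pro) = 256-bit bus, 310MHz DDR (620MHz effective);
# NV30 (rumored) = 128-bit bus, 500MHz DDR II (1000MHz effective).

def peak_bandwidth_gbs(bus_width_bits: int, effective_mhz: float) -> float:
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * (effective_mhz * 1e6) / 1e9

r300 = peak_bandwidth_gbs(256, 620)    # ~19.8 GB/s
nv30 = peak_bandwidth_gbs(128, 1000)   # ~16.0 GB/s
print(f"R300: {r300:.1f} GB/s, NV30 (rumored): {nv30:.1f} GB/s")

# Effective DDR II clock a 128-bit bus needs to match the R300's 256-bit bus:
print(f"Parity clock: {620 * 256 / 128:.0f}MHz effective (~1.24GHz)")
```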
 
WaltC said:
SirPauly said:
Even nVidia believes 256-bit is the future, so why in hell would anyone nit-pick ATI for offering it now?

David Kirk:

We'll move to 256-bit when we feel that the cost and performance balance is right.

Maybe ATI felt the cost and performance balance was right for them with the 9700.

nVidia seems pretty rattled by the R300. Kirk's comments in the recent interviews, and especially now on the nVidia site, are direct shots aimed at ATI and the 9700--the most direct shots I can remember the company making, even when talking about 3dfx. I especially thought his "128-bit precision is great" but "96-bit precision is a half-measure" comments were pretty funny...;) *chuckle* (As if an end user playing a 3D game could ever tell the difference between them.)

And I also thought it was funny to see him squirm on the memory bandwidth hook--heh, heh. It looks to me like nVidia would have to run 1.24GHz DDR II just to match ATI's 620MHz DDR bandwidth, because ATI's bus is 2x the width. I have a hard time seeing nv30s produced on any kind of scale with 1GHz DDR II anytime soon, and even more than that, I can't imagine right now what that would cost...! How attractive is a $599 nv30 going to be, especially if it isn't seen as compelling next to the 9700 Pro?

I just have a feeling that nVidia's decision to latch on to the latest chip manufacturing processes and the latest DDR II RAM technology is going to cost it a bundle, in light of what ATI has done with existing technologies in the current R300 product line.

Suppose it doesn't?
 
Testiculus Giganticus said:
<snip>
Suppose it doesn't?

Doesn't what?
- Cost $600?
- Compare favorably to the R300?
- Use DDR II?
- Use DDR II clocked at 500MHz?
- Use a 128-bit bus?

If you're going to leave it open to interpretation, then I read it as "Suppose it doesn't compare favorably to the R300?".
 
And I also thought it was funny to see him squirm on the memory bandwidth hook--heh, heh. It looks to me like nVidia would have to run 1.24GHz DDR II just to match ATI's 620MHz DDR bandwidth, because ATI's bus is 2x the width.

If Nvidia felt they needed a 256-bit bus on NV30, I think it is safe to say they would have used one (assuming they didn't, that is). Though it may be only speculation until next week, NV30's effective bandwidth will more than likely exceed R300's due to some type of bandwidth-saving technique, and memory speed will not matter as much for Nvidia as it will for ATI. My guess is that David Kirk doesn't want to let any detailed info slip at this point.
 
I have my doubts. Also, you have to calculate the "effective bandwidth" of the R300 as well. Is HyperZ + color compression + a 256-bit bus @ 310MHz on R300 > NV30's efficiency algorithms + a 128-bit bus @ 500MHz?

It would be unfair to compare NV30's effective bandwidth against R300's real bandwidth.
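To put toy numbers on that question: both efficiency figures below are made up purely for illustration (neither vendor has published anything real); the point is only how big a saving NV30 would need just to pull level:

```python
# Sketch of the effective-bandwidth question above. The savings percentages
# are MADE UP for illustration; neither vendor has published real figures.

def effective_gbs(bus_bits: int, effective_mhz: float, traffic_saved: float) -> float:
    """Raw GB/s scaled up by the fraction of traffic a saving scheme removes."""
    raw = (bus_bits / 8) * effective_mhz / 1e3
    return raw / (1 - traffic_saved)

# Hypothetical: HyperZ III + color compression save 25% of R300's traffic,
# while NV30's unknown scheme would have to save 40% just to reach parity.
print(f"R300: {effective_gbs(256, 620, 0.25):.1f} GB/s effective")   # ~26.5
print(f"NV30: {effective_gbs(128, 1000, 0.40):.1f} GB/s effective")  # ~26.7
```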
 
Johnathan256 said:
And I also thought it was funny to see him squirm on the memory bandwidth hook--heh, heh. It looks to me like nVidia would have to run 1.24GHz DDR II just to match ATI's 620MHz DDR bandwidth, because ATI's bus is 2x the width.

If Nvidia felt they needed a 256-bit bus on NV30, I think it is safe to say they would have used one (assuming they didn't, that is). Though it may be only speculation until next week, NV30's effective bandwidth will more than likely exceed R300's due to some type of bandwidth-saving technique, and memory speed will not matter as much for Nvidia as it will for ATI. My guess is that David Kirk doesn't want to let any detailed info slip at this point.

Ah! A true believer... and just what makes you think this? Just because nVidia says so? I think we will soon learn the truth, and many are setting themselves up for major disappointment. I hope you are right, but I fear the NV30's not going to be much faster (if at all) than the 9700...
 
Johnathan256 said:
And I also thought it was funny to see him squirm on the memory bandwidth hook--heh, heh. It looks to me like nVidia would have to run 1.24GHz DDR II just to match ATI's 620MHz DDR bandwidth, because ATI's bus is 2x the width.

If Nvidia felt they needed a 256-bit bus on NV30, I think it is safe to say they would have used one (assuming they didn't, that is). Though it may be only speculation until next week, NV30's effective bandwidth will more than likely exceed R300's due to some type of bandwidth-saving technique, and memory speed will not matter as much for Nvidia as it will for ATI. My guess is that David Kirk doesn't want to let any detailed info slip at this point.

It's funny to see this kind of optimism... (Scroll up to the top of this page, or go back to the first page, and check again what Kirk said--IMHO that clearly means 128-bit, nothing else.)

Please explain why "NV30's effective bandwidth will more than likely exceed R300's due to some type of bandwidth-saving technique, and memory speed will not matter as much for Nvidia as it will for ATI."

I don't see your point against ATI's more-than-likely doubled bandwidth, combined with their own equally high-level bandwidth-saving techniques and optimizations--did you forget HyperZ III, for example?
 
Doesn't the real question here come down to "How does NV30 implement antialiasing?", since that's the only time ATI's raw bandwidth advantage is going to have a significant impact?

If it's brute-force multisampling, I think it'll be difficult for NV30 to match R300; if it's a different approach, all bets are off.
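Rough numbers on why AA is where raw bandwidth bites--resolution, overdraw, and sample count below are my own guesses, not measured figures:

```python
# Why FSAA is where raw bandwidth bites: brute-force 4x multisampling writes
# four color samples and reads/writes four Z samples per covered pixel.
# Resolution, overdraw, and sample count here are illustrative guesses.

WIDTH, HEIGHT, FPS = 1600, 1200, 60
SAMPLES = 4                    # 4x multisampling
BYTES_COLOR, BYTES_Z = 4, 4    # 32-bit color, 24-bit Z + 8-bit stencil
OVERDRAW = 2.0                 # assumed average overdraw

bytes_per_frame = (WIDTH * HEIGHT * OVERDRAW * SAMPLES
                   * (BYTES_COLOR + 2 * BYTES_Z))   # Z is read, then written
gbs = bytes_per_frame * FPS / 1e9
print(f"~{gbs:.1f} GB/s of framebuffer traffic alone")  # ~11.1 GB/s

# Against ~19.8 GB/s (R300) or ~16 GB/s (rumored NV30), that leaves little
# headroom for texture traffic unless compression pulls its weight.
```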
 
Johnathan256 said:
If Nvidia felt they needed a 256-bit bus on NV30, I think it is safe to say they would have used one (assuming they didn't, that is). Though it may be only speculation until next week, NV30's effective bandwidth will more than likely exceed R300's due to some type of bandwidth-saving technique, and memory speed will not matter as much for Nvidia as it will for ATI. My guess is that David Kirk doesn't want to let any detailed info slip at this point.

You are forgetting that aside from the raw bandwidth numbers, ATI is also using bandwidth-reducing technologies, but that's really beside the point. The problem is that everyone "needs" a 256-bit bus; not everyone wants to pay for one. They cost more because they use more pins--but I doubt the cost is prohibitive (obviously). What nVidia's doing is clear: they want the fastest off-the-shelf components (read: DDR II) to drive their chip on the least expensive PCB they can design as a reference for OEMs. There's nothing wrong with that--unless a competitor comes along, invests some time in designing a 256-bit bus, and can offer it at a price point comparable to nVidia's 128-bit products. At that point I'd say it becomes a problem from a competitive standpoint.

As far as "letting things slip" goes, it seems Kirk has let an awful lot of things slip lately...;) Like making it crystal clear that nv30 uses a 128-bit memory bus, boasting that it will have a 128-bit color pipeline while declaring that ATI "only" has a 96-bit one (and trying to make a difference out of it), and declaring that its shader pipelines can use "thousands" of instructions while the R300 is limited to far fewer, etc. While I think it's possible nVidia might include some sort of bandwidth-saving technique, I don't expect it to be any more earth-shattering than ATI's, although I do expect nVidia to market it with the usual hyperbole.

Basically, it sounds to me like Kirk knows full well that because of the 9700 Pro, the nv30 won't have anywhere near the impact it would have had in a competition-free environment (which I honestly think nVidia expected to have). So he's already started nitpicking long before the nv30 ships. Frankly, I can't see what good nVidia thinks "thousands of shader instructions" will do for DX9 software, since the R300 supports the official instruction limits of DX9. Maybe in custom software? All told, these strike me as extremely weak defenses of a product that is still a few months from shipping. His comments on the difference between 96 bits and 128 bits in the color pipeline aren't worth repeating. Ditto his comments on DDR II--because ATI can use it as well and still maintain a 256-bit bus--or does Kirk think people will be so enamored of the RAM's clock that they'll forget the width of the bus...?

Any way you slice it, though, for high color depths and FSAA and AF, you need *real* bandwidth--not assumed, effective, guesstimated bandwidth (which is only good for marketing). And right now, based on what Kirk has let slip, it sure looks like ATI will remain the bandwidth king for the foreseeable future.
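To show why the "effective" number is shaky: how much compression actually saves depends entirely on the scene. A sketch with invented compression figures (nothing below is a vendor number):

```python
# "Effective" bandwidth is workload-dependent, which is why it's a marketing
# number: compression only helps on data that actually compresses.
# The fractions and ratios below are invented purely for illustration.

RAW_GBS = 16.0  # e.g. a 128-bit bus at 1GHz effective

for label, compressible, ratio in [
    ("best case (AA samples, flat colors)", 0.9, 4.0),
    ("typical scene",                       0.5, 2.0),
    ("worst case (noisy textures)",         0.1, 1.2),
]:
    # Traffic that compresses moves at ratio:1; the rest goes through raw.
    effective = RAW_GBS / (compressible / ratio + (1 - compressible))
    print(f"{label}: ~{effective:.1f} GB/s effective")
```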

I know, of course, that I could be all wrong here and that nVidia could have some other-than-brute-force technology up its sleeve. Be that as it may, I just don't believe that to be the case. nVidia has little experience doing anything other than brute force--indeed, their entire GPU line to date has been brute-force based. 3dfx didn't really give them much, and I recall that at the time nVidia said it would never use any of the GigaPixel technology, as it felt it already had better alternatives in the works. I guess we'll all know in a couple of weeks or less...;)
 
Yes, all of you make excellent points, and I respect each and every one. I base my views primarily on Nvidia's reputation. Why would Nvidia not simply add more pipelines, floating point, and a 256-bit bus to the basic GeForce4 design unless they had something better in mind? And if Nvidia thought the wider bus was necessary, why not use it? They have more money than any competitor, so they could surely implement it! Everything we already know points to a design that relies on elegance rather than brute force. Of course I could be wrong, and this is only an opinion, but ask yourselves: as far as the memory bus is concerned, why would Nvidia not opt for 256-bit unless they had something else up their sleeves? Wouldn't they be setting themselves up for second place?
 
Johnathan256 said:
Yes, all of you make excellent points, and I respect each and every one. I base my views primarily on Nvidia's reputation. Why would Nvidia not simply add more pipelines, floating point, and a 256-bit bus to the basic GeForce4 design unless they had something better in mind? And if Nvidia thought the wider bus was necessary, why not use it? They have more money than any competitor, so they could surely implement it! Everything we already know points to a design that relies on elegance rather than brute force. Of course I could be wrong, and this is only an opinion, but ask yourselves: as far as the memory bus is concerned, why would Nvidia not opt for 256-bit unless they had something else up their sleeves? Wouldn't they be setting themselves up for second place?

I personally think that nVidia never saw the R300 coming. They got blindsided! They never thought ATI could do it.
 