Latest NV30 Info

A 256-bit bus makes the PCB more expensive.

Bottom line:
Why make a 256-bit bus when you can pretty much reach (or even exceed) the performance of your closest competitor while making your card cheaper or about the same price?

Who needs the most revolutionary part out there, priced at $500-600? Who would buy it?

If NV30 is really as good as NVIDIA claims it to be, the 128-bit bus will be justified.
 
Why make a 256-bit bus when you can pretty much reach (or even exceed) the performance of your closest competitor while making your card cheaper or about the same price?

Who said that using bleeding-edge DDR-II RAM on a 128-bit bus makes for a cheaper product than using slower and more mainstream DDR RAM on a 256-bit bus?

What will eventually be telling as far as "architectural sophistication" goes is this... assuming NV30 is 128-bit with DDR-II RAM:

* Clock both cores and memory for NV30 and R300 at the same speed, and see how the performance compares.

Because we can assume that at some point nVidia will add a 256-bit bus to their arsenal, and we know ATI will add fast DDR-II....
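
Just to put rough numbers on what an equal-clock comparison implies (a sketch only - it treats the memory speed as an effective DDR data rate and assumes the rumored 128-bit vs. 256-bit bus widths):

    # Sketch: peak memory bandwidth if both parts ran their memory at the same
    # effective data rate. The 620 MT/s figure is purely illustrative.
    def peak_bw_gbs(bus_bits, effective_mts):
        """Peak bandwidth in GB/s from bus width (bits) and data rate (MT/s)."""
        return (bus_bits / 8) * effective_mts / 1000

    print(peak_bw_gbs(128, 620))  # ~9.9 GB/s on a 128-bit bus
    print(peak_bw_gbs(256, 620))  # ~19.8 GB/s on a 256-bit bus - exactly double

So clock for clock, the 256-bit part starts with twice the raw bandwidth; what would be telling is how much of that gap the rest of the architecture makes up.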
 
Who said that using bleeding-edge DDR-II RAM on a 128-bit bus is cheaper than using slower and more mainstream DDR RAM on a 256-bit bus?

Who said it isn't? :rolleyes:

It's certainly better than using bleeding-edge DDR-II RAM on a 256-bit bus! Especially since the difference wouldn't be that great in our CPU-limited world!
 
alexsok said:
Why make a 256-bit bus when you can pretty much reach (or even exceed) the performance of your closest competitor while making your card cheaper or about the same price?

Why do you assume that a 128-bit bus with 800+ MHz DDR-II memory, and perhaps an even more complicated 8-way memory controller, would be significantly cheaper than a 4-way memory controller and ~600MHz DDR memory on a 256-bit bus? The costs are a trade-off.
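
For a rough sense of the raw numbers involved (reading "800+ MHz" and "~600 MHz" as effective DDR data rates, which is an assumption - neither part's final memory spec is known):

    # Back-of-envelope peak bandwidth for the two hypothetical configurations.
    nv30_style = 128 / 8 * 800 / 1000  # 12.8 GB/s: 128-bit bus, 800 MT/s DDR-II
    r300_style = 256 / 8 * 600 / 1000  # 19.2 GB/s: 256-bit bus, 600 MT/s DDR
    print(nv30_style, r300_style)

Neither figure says anything about what the memory, the PCB, or the memory controller actually cost, of course - that's exactly the trade-off.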


Personally, I don't think the NV30 needs a 256-bit memory bus. Actually, with the 16-way crossbar memory controller, embedded DRAM, 1GHz DDR-II memory, 3dfx "Rampage" technology, Gigapixel deferred rendering technology, and a 600% increase in "effective" bandwidth, I think they will probably go back to a 64-bit bus and still blow the doors off the 9700.
 
Who said it isn't?

You certainly implied it! (See Bigus' response as well.) You said:

Why make a 256-bit bus when you can pretty much reach (or even exceed) the performance of your closest competitor while making your card cheaper or about the same price?

What is your implication, if not that nVidia could reach R300 performance at the same or lower cost, by virtue of a 128-bit bus and whatever memory and other techniques they'll use?
 
Why do you assume that a 128-bit bus with 800+ MHz DDR-II memory, and perhaps an even more complicated 8-way memory controller, would be significantly cheaper than a 4-way memory controller and ~600MHz DDR memory on a 256-bit bus?

OK, point proven by both of you, but who said there is a clear winner between the two approaches? NVIDIA and ATI chose different routes to follow, that's all.
 
OK, point proven by both of you, but who said there is a clear winner between the two approaches?

Um...no one did. :) We won't have such an answer until we see NV30...and even then, the answer will be muddled because we won't know the true cost of each of the parts. Selling price != cost.
 
I don't understand your point...

OK, let's start with a clean sheet of paper:

I'm saying that there is no point for NVIDIA to use a 256-bit bus for NV30 if NV30 is as good as they say (all the bandwidth-saving techniques, etc.). I think that by utilizing a 128-bit bus and bleeding-edge DDR-II memory, they can exceed R300's performance while maintaining a similar price tag, both from the chip's price standpoint and in the retail and OEM prices.

What I'm really trying to say here is that ATI & NVIDIA chose different paths to follow. ATI chose the 256-bit bus with DDR, while NVIDIA chose the 128-bit bus with DDR-II. We don't yet know which path will turn out to be the better and more effective one, so I'll reserve further judgement until we do...
 
I'm saying that there is no point for NVIDIA to use a 256-bit bus for NV30 if NV30 is as good as they say (all the bandwidth-saving techniques, etc.).

OK, we agree on that.

Just a couple comments on that though:

1) The "success" of a part is not just performance and price, it's also compatibility. Typically, the more "aggressive" the memory saving techniques are, the more chances there are for incompatibilities. (Rendering anomolies, etc.)

Brute force, while less elegant and potentially more expensive, is typically less problematic. If NV30 uses some really exotic techniques, it will be interesting to see how well they manage on the compatibility front.

2) It's also possible that, despite the rumors of it being TSMC's "fault", part of the lateness of NV30 is due to troubleshooting the design of bleeding-edge memory interfaces or bandwidth-saving architectures. If that's the case, then there is additional "cost" for the NV30 in the form of extra development time and money.

both from the chip's price standpoint and in the retail and OEM prices.

Well, as I hinted above, prices only give a loose representation of cost, which is what we are really after. So it will be hard to really give a definitive answer for the "better solution". The competitive market will dictate the price of these cards, and therefore the profit margins.

In other words, it might cost $5 to make an NV30 board, or $275. ;) If it performs "on par" with R300, it will sell at about the same price.
 
The Radeon 8500 had more bandwidth than the GF3 Ti 500, but it didn't outperform it, and with the 40.xx Detonators the GF3 got more speed. The efficiency of your design plays a major part too. NVIDIA might have some new tech for memory, we just don't know; having a 256-bit bus or not just doesn't say much about the real performance of a card. (Parhelia.)
Don't flame me, I'm not an nvidiot nor a fanatic. Yes, the Radeon 9700 is simply an amazing card, but NVIDIA is no small company either, and we shouldn't make assumptions with half-known specs. Just my 1.9999 cents!!!
 
1) The "success" of a part is not just performance and price, it's also compatibility. Typically, the more "aggressive" the memory saving techniques are, the more chances there are for incompatibilities. (Rendering anomolies, etc.)

I think that's a bit of a reach there, Joe. Having aggressive memory-saving tech does not automatically equate to compatibility issues - that would be governed far more by the quality of the drivers, the quality of the hardware, and whether it's enabled full time.
 
Prometheus said:
The Radeon 8500 had more bandwidth than the GF3 Ti 500, but it didn't outperform it, and with the 40.xx Detonators the GF3 got more speed.

Just for the record: at its release time, the 8500 did not outperform the GF3 Ti 500, you're right.
But you mentioned the new Det40 series - with or without them, the GF3 was beaten by the 8500 a LONG TIME AGO.
Currently, 8500s perform quite similarly to the GF4 Ti 4200... :)
 
T2k said:
Prometheus said:
The Radeon 8500 had more bandwidth than the GF3 Ti 500, but it didn't outperform it, and with the 40.xx Detonators the GF3 got more speed.

Just for the record: at its release time, the 8500 did not outperform the GF3 Ti 500, you're right.
But you mentioned the new Det40 series - with or without them, the GF3 was beaten by the 8500 a LONG TIME AGO.
Currently, 8500s perform quite similarly to the GF4 Ti 4200... :)

that is until you enable Crapvision(TM) ;)
 
I think that's a bit of a reach there, Joe. Having aggressive memory-saving tech does not automatically equate to compatibility issues - that would be governed far more by the quality of the drivers, the quality of the hardware, and whether it's enabled full time.

Hmmm...you seemed to contradict yourself there. ;)

Specifically: "Whether it's enabled full time." In other words, the more "aggressive" you are with bandwidth savings techniques (such as enabling it all time, vs. certain circumstances), the more chances for incompatibilities. If you don't enable something "full time", then it is not as aggressive, by definition, as enabling it full time. You don't reap the full benefits.

I never said that more aggressive techniques "automatically equate" to compatibility issues. However, as a general rule, the more aggressive you are, the more steps you have to take to ensure compatibility. (With, for example, driver hacks, etc.)

Think of data compression in general. The more aggressive (lossy) the compression, the worse the quality of the result.

Please don't misunderstand...I'm not saying that if NV30 uses some advanced and aggressive techniques, that means there will be compatibility problems. I'm more implying that if they do have such techniques, there is a higher probability of it.
 
The Radeon 8500 had more bandwidth than the GF3 Ti 500, but it didn't outperform it, and with the 40.xx Detonators the GF3 got more speed. The efficiency of your design plays a major part too.

Of course. As T2K mentioned, the Radeon 8500 does beat the GeForce3 clock for clock now. I believe it falls right between the GeForce3 and the GeForce4 Ti in terms of performance per bandwidth.

Interestingly, I would say the 9500 Pro beats the GeForce4 Ti in terms of performance per bandwidth. It performs about on par in non-AA / aniso situations, and outperforms it with AA / aniso. It's not quite apples to apples, because the 9500 is 8x1 while the GeForce is 4x2. It'll be interesting to see how the 9500 non-Pro (4x1) performs.

Point being, ATI's memory interface / bandwidth savings techniques have improved considerably from the last generation. ATI just didn't count on the 256 bit bus to do all the work.
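
To be clear about what I mean by "performance per bandwidth", something like this crude metric (the numbers below are placeholders for illustration, not benchmark results):

    # Crude efficiency metric: frames per second per GB/s of peak bandwidth.
    def perf_per_bandwidth(fps, peak_bw_gbs):
        return fps / peak_bw_gbs

    # Placeholder figures, purely to show how the comparison would work:
    brute_force_part = perf_per_bandwidth(100.0, 19.0)  # wide bus, modest savings
    efficient_part = perf_per_bandwidth(95.0, 12.0)     # narrow bus, better savings
    print(brute_force_part, efficient_part)  # higher = more work per GB/s

It's rough, since fill rate and CPU limits muddy the picture, but that's the sense in which I'm using the term.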
 
Specifically: "Whether it's enabled full time." In other words, the more "aggressive" you are with bandwidth savings techniques (such as enabling it all time, vs. certain circumstances), the more chances for incompatibilities. If you don't enable something "full time", then it is not as aggressive, by definition, as enabling it full time. You don't reap the full benefits.

Is 9700’s colour compression enabled full time, or just when FSAA is enabled?

Think of data compression in general. The more aggressive (lossy) the compression, the worse the quality of the result.

Spurious analogy, since all the memory-saving techniques so far are lossless. You can apply that analogy if someone decides to do something like MPEG compression.

I'm more implying that if they do have such techniques, there is a higher probability of it

I’d say that there's more room for incompatibilities, not because of the techniques applied (if they are applied correctly), but because it just makes the entire chip more complex.
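
To illustrate why these schemes can be lossless at all, here's a toy example - not ATI's (or anyone's) actual hardware scheme, just the principle that with multisampling every sample in a pixel is identical unless a polygon edge crosses it, so interior pixels can be stored once and reconstructed exactly:

    # Toy lossless colour compression for one multisampled pixel (illustration
    # only). Interior pixels collapse to a single colour; edge pixels are kept
    # raw, so decompression always reproduces the original data bit for bit.
    def compress_pixel(samples):
        if all(s == samples[0] for s in samples):
            return ("uniform", samples[0])  # one colour instead of N
        return ("raw", list(samples))       # edge pixel: store everything

    def decompress_pixel(packed, num_samples):
        kind, data = packed
        return [data] * num_samples if kind == "uniform" else list(data)

    pixel = [(255, 0, 0, 255)] * 4          # interior pixel at 4x AA
    assert decompress_pixel(compress_pixel(pixel), 4) == pixel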
 
Think of data compression in general. The more aggressive (lossy) the compression, the worse the quality of the result.

Sorta like a 700 MB PSX-CD image file that I zipped down to 1.5 MB (yes, it's true - most of the size was just crap, dummy files).
No loss there ;)
 
Is 9700’s colour compression enabled full time, or just when FSAA is enabled?

Actually, I don't recall. (Was that a rhetorical question that I'm supposed to know?) ;)

Spurious analogy, since all the memory-saving techniques so far are lossless.

Well, all the compression schemes used (save DXTC) are lossless. Not all memory-saving techniques are about data compression, though, correct? (Occlusion culling....) The analogy was not meant to be taken literally.

For example... remember those 3dfx "HSR drivers"? Didn't they have some type of "slider" (or registry setting) to adjust the aggressiveness of the algorithm used? The more aggressive, the faster the performance, but the more artifacts?
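
Conceptually, something along these lines (purely a generic sketch of the idea - the actual 3dfx driver logic was never made public):

    # Generic "aggressiveness slider" idea for hidden-surface removal: skip
    # work that is only *probably* hidden. bias = 0 is conservative (skip only
    # what is strictly behind the stored depth); a larger bias skips more and
    # runs faster, but can drop geometry that was actually visible - artifacts.
    def should_skip(estimated_depth, stored_depth, bias=0.0):
        return estimated_depth > stored_depth - bias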

I'd say that there's more room for incompatibilities, not because of the techniques applied (if they are applied correctly), but because it just makes the entire chip more complex.

Sure, a possibility. (Which I alluded to earlier with the lateness of the NV30 possibly not being entirely because of TSMC issues...)
 