Are you ready? (NV30 256-bit bus?)

I personally think that nVidia never saw the R300 coming. They got blindsided! They never thought ATI could do it.

Yes, I would have to agree. I think Nvidia knew Ati was working on something, but they didn't expect it to beat Geforce4 in terms of performance. I think this might be one reason that NV30 was delayed. The rumor of Nvidia adding 2 more pipelines to the NV30 supports this.
 
Johnathan256 said:
I think Nvidia knew Ati was working on something, but they didn't expect it to beat Geforce4 in terms of performance.
That would be a very silly thing on nvidia's part: Why would ATi produce a new part that was slower than the competition's current offerings?
 
The memory bandwidth wall that sometimes gets talked about is not really a wall but more like a gradual upward slope.

As I mentioned before, chip performance is primarily driven by transistor count and frequency. These then set the memory bandwidth requirements. If somehow a chip could suddenly be designed with twice the frequency, hardware designers would utilize some combination of available memory bandwidth technologies to meet the requirements. If they didn't, all that frequency would be wasted and the design wouldn't make much sense.

Each time a memory bandwidth technology is put into use, you move up the slope a little further, and things get a little more difficult the next cycle. The trick is to find the most gradual way up the slope. So at one time you might choose a crossbar controller, or compress the frame buffer, or use a tiled architecture, or put more effort into an on chip cache, or use edram, or increase the bus width, or use higher frequency memory, or some combination of these, etc.
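To put rough numbers on just two of those options, bus width and memory frequency, here is a quick back-of-the-envelope sketch in Python (the figures are illustrative assumptions, not any particular chip's specs):

Code:
# Back-of-the-envelope peak bandwidth for a DDR-style memory interface.
# All numbers are illustrative assumptions, not real product specs.

def peak_bandwidth_gb_s(bus_width_bits, mem_clock_mhz, transfers_per_clock=2):
    """Peak bandwidth in GB/s = bytes per transfer * effective transfer rate."""
    bytes_per_transfer = bus_width_bits / 8.0
    transfers_per_second = mem_clock_mhz * 1e6 * transfers_per_clock
    return bytes_per_transfer * transfers_per_second / 1e9

# Widening the bus and raising the memory clock are two different ways
# up the same slope:
print(peak_bandwidth_gb_s(128, 300))  # 128-bit DDR @ 300 MHz -> ~9.6 GB/s
print(peak_bandwidth_gb_s(256, 300))  # 256-bit DDR @ 300 MHz -> ~19.2 GB/s
print(peak_bandwidth_gb_s(128, 500))  # 128-bit DDR @ 500 MHz -> ~16.0 GB/s

Either lever buys raw bandwidth; which one is cheaper to pull this cycle is exactly the "most gradual way up the slope" question.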

What you don't use is as important as what you do, since it means you've met your current memory bandwidth needs while keeping costs down and the unused options open for the next cycle.
 
That would be a very silly thing on nvidia's part: Why would ATi produce a new part that was slower than the competition's current offerings?

Well, I'll admit that I could have worded that better. What I meant was that Nvidia most likely didn't expect R300 to beat Geforce4 by such a wide margin. And believe what you will, Ati's products have performed mostly subpar in the past (mainly due to driver problems). For example, the Radeon 8500 failed to outperform the Geforce3 Ti500 in a majority of benchmarks upon its release (though it was a better design in my opinion; weak drivers were the main problem). The original Radeon had similar problems.
 
Johnathan256 said:
And believe what you will, Ati's products have performed mostly subpar in the past (mainly due to driver problems).
You should never rely on your competitor's mistakes in order to make your product/company a success... Your competitor might actually learn from those mistakes someday ;)
 
You should never rely on your competitor's mistakes in order to make your product/company a success... Your competitor might actually learn from those mistakes someday

How true that is...

That would be a very silly thing on nvidia's part: Why would ATi produce a new part that was slower than the competition's current offerings?

That is kinda funny considering the debates that have been going on in here. Why would Nvidia produce a new part (NV30) that was slower than the competition's current offering (R300)? A very ironic post indeed...
 
OpenGL guy said:
Johnathan256 said:
I think Nvidia knew Ati was working on something, but they didn't expect it to beat Geforce4 in terms of performance.
That would be a very silly thing on nvidia's part: Why would ATi produce a new part that was slower than the competition's current offerings?

I think he might mean here, and if so I agree, that nVidia was so cocky considering ATI's past efforts that it considered the 9700 Pro would be a product of about GF4 Ti4600 power.

On several of ATI's past new-card releases, nVidia has released a "new" driver set for its current flagship product at the time (or competitive niche product)--sometimes on the same day it seemed like--which invariably bested the just-released new ATI product (at least in popular 3D benchmarks.) I've witnessed this more than once and frankly thought ill of nVidia for doing it--but business is business and competition makes it happen.

Note that this time nVidia did *exactly* the same thing. Shortly after the 9700 Pro started shipping (I can only remember approximately here), nVidia put out its traditional "ATI new-product stomping driver set"....*chuckle*...the 40.xx (beta) Detonators. I was still using my GF4 TI4600 at the time and installed them--they were so buggy that I quickly uninstalled them and reinstalled the last stable drivers, the 30.82s.

If the 9700 Pro release had been a mirror of the past the new Detonator driver set would have propelled the Ti4600 to a speed 15%-40% greater than the 9700 Pro could manage at its introduction, depending on the benchmark. Note that this particular Detonator release seemed to accomplish only one thing reasonably well, and that was to add >1K points to the Ti4600's 3D Mark xx scores....;) And...I think this was nVidia's only intent when releasing these drivers (the 40.42 beta Detonators were so buggy, in fact, that it prompted Epic to mention them in its 'read.me' section for UT2K3--the demo and the game itself--and it wasn't a positive mention, but rather a warning from Epic for all nVidia users to steer clear of the 40.42 beta Detonators and drop back a set for the time being and wait for better drivers.)

So basically, at least it appears that way to me circumstantially, nVidia thought the 9700 Pro release would be "ho-hum" and "business as usual," and that it would handle the release just as it had handled several preceding ATI releases: release a new driver set for its currently shipping product that propels it beyond what the brand-new ATI product can score on its introduction.

*chuckle* As everybody saw, nVidia's little scheme didn't even come close to working--this time...;) And the whole thing was rather embarrassing for nVidia, I should think. That's one reason why I think the 9700 Pro product release caught nVidia by complete surprise. If nVidia had had an inkling of the true power of this product I don't think it would have bothered releasing a buggy, beta driver set just to slightly inflate some benchmark results--I think they would have known the futility of that before they tried it. I've said it before...if I were ATI I would much prefer being underestimated to being overestimated....;) Much...;) That makes product releases like the 9700 Pro that much sweeter, I should think.
 
I for one do not believe that Nvidia moved NV30 from 6 pipelines to 8 pipelines. It just doesn't make sense. Would it not take quite some time to change a GPU core? I believe NV30 has been 8 pipes all along; however, in the additional time it's taking for NV30 to arrive, I'm sure Nvidia has been tweaking that sucker to no end, and making the drivers stellar.

That said, the next Nvidia card I get will be something with a 256-bit bus, which one almost knows Nvidia will move to after NV30.
 
That would be a very silly thing on nvidia's part: Why would ATi produce a new part that was slower than the competition's current offerings?

That is kinda funny considering the debates that have been going on in here. Why would Nvidia produce a new part (NV30) that was slower than the competition's current offering (R300)? A very ironic post indeed...

Well, if all you have IS a slower or about the same performing new product, just what do you do? Maybe try to do everything you can to get your product to have a bit more performance while holding off introducing it.... maybe even missing a product cycle....... :rolleyes:
 
In the newly posted interview said:
In the past few generations of NVIDIA’s products, both bandwidth and computation affect performance. Overclocking either memory speed or core speed improves performance, so that means that sometimes rendering is limited by memory bandwidth, and other times rendering is limited by pipeline processing power. To truly move to the next level in terms of performance and features, we’ll need to increase both. A wider memory interface is one way to increase bandwidth, but there are other ways. As you mention, DDR2 is another way of building a higher throughput memory system. I think that DDR2 is going to be really exciting for the graphics community, since it brings the potential of more memory bandwidth per signal pin. This is a good trend, regardless of how many bits wide the datapath is!

There are costs associated with both increasing the datapath width and increasing the computational core. If you look at the new programmable features on the OpenGL and DirectX, you’ll see that a lot more floating point math is required, which requires both bandwidth and computation growth. Both of these will be increased in the next generation. We’ll move to 256bit when we feel that the cost and performance balance is right.

Wow, isn't this what I've been saying over and over? That when nVidia's advanced design teams sat down, they went for the bandwidth solution with a future (not a one-time shot) that would provide for their architecture's needs... You can basically look on pages 3, 4, 5 for this :)
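As a rough sketch of the "bandwidth per signal pin" point from the quote (the clock rates below are assumptions picked for illustration, not NV30 or R300 specs):

Code:
# "Bandwidth per signal pin": each data pin carries 1 bit per transfer,
# so per-pin bandwidth scales with the effective data rate.
# Clock figures are illustrative assumptions only.

def gbit_per_pin(effective_mtransfers_per_s):
    return effective_mtransfers_per_s * 1e6 / 1e9

ddr_pin  = gbit_per_pin(650)   # e.g. 325 MHz DDR  -> 650 MT/s  -> 0.65 Gbit/s per pin
ddr2_pin = gbit_per_pin(1000)  # e.g. 500 MHz DDR2 -> 1000 MT/s -> 1.00 Gbit/s per pin

# Total bandwidth = per-pin rate * bus width in bits, converted to GB/s:
print(128 * ddr2_pin / 8)  # 128-bit DDR2 -> ~16.0 GB/s
print(256 * ddr_pin / 8)   # 256-bit DDR  -> ~20.8 GB/s, using twice the data pins

At those assumed speeds, a 128-bit DDR2 interface gets within shouting distance of a 256-bit DDR interface while routing half the data pins, which is presumably the cost trade-off Kirk is alluding to.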

All that argument and yet it accomplished nothing outside of a smile.

PS. Why do people seem to just brush over SA's excellent posts?
 
nVidia knew that R300 would be a great product. When did they know it? When John Carmack and iD chose to run the Doom III E3 demo on Ati hardware. I think at that point, those truly understanding souls at nVidia started to worry. Whether or not the higher-ups calling the shots in the company wanted to admit it then is another issue. And what their response has been since is the key issue now.
 
Vince said:
In the newly posted interview said:
In the past few generations of NVIDIA’s products, both bandwidth and computation affect performance. Overclocking either memory speed or core speed improves performance, so that means that sometimes rendering is limited by memory bandwidth, and other times rendering is limited by pipeline processing power. To truly move to the next level in terms of performance and features, we’ll need to increase both. A wider memory interface is one way to increase bandwidth, but there are other ways. As you mention, DDR2 is another way of building a higher throughput memory system. I think that DDR2 is going to be really exciting for the graphics community, since it brings the potential of more memory bandwidth per signal pin. This is a good trend, regardless of how many bits wide the datapath is!

There are costs associated with both increasing the datapath width and increasing the computational core. If you look at the new programmable features on the OpenGL and DirectX, you’ll see that a lot more floating point math is required, which requires both bandwidth and computation growth. Both of these will be increased in the next generation. We’ll move to 256bit when we feel that the cost and performance balance is right.

Wow, isn't this what I've been saying over and over? That when nVidia's advanced design teams sat down, they went for the bandwidth solution with a future (not a one-time shot) that would provide for their architecture's needs... You can basically look on pages 3, 4, 5 for this :)

And this is not true. Take a look at those pages.
 
DadUM said:
nVidia knew that R300 would be a great product. When did they know it? When John Carmack and iD chose to run the Doom III E3 demo on Ati hardware. I think at that point, those truly understanding souls at nVidia started to worry. Whether or not the higher-ups calling the shots in the company wanted to admit it then is another issue. And what their response has been since is the key issue now.

Maybe yes, maybe no - but if so, that was too late.
 
OpenGL Guy, I think you are holding a double standard there. Saying something like... "That would be a very silly thing on nvidia's part: Why would ATi produce a new part that was slower than the competition's current offerings?" Yet, whenever someone mentions the thought of the NV30 being faster than the 9700 pro, you complain... "Why does everyone assume the NV30 will be faster?" I ask you the same question then... "Why would Nvidia produce a new part that is slower than the competition's current offerings?"
 
WaltC said:
I think he might mean here, and if so I agree, that nVidia was so cocky considering ATI's past efforts that it considered the 9700 Pro would be a product of about GF4 Ti4600 power.

On several of ATI's past new-card releases, nVidia has released a "new" driver set for its current flagship product at the time (or competitive niche product)--sometimes on the same day it seemed like--which invariably bested the just-released new ATI product (at least in popular 3D benchmarks.) I've witnessed this more than once and frankly thought ill of nVidia for doing it--but business is business and competition makes it happen.

Note that this time nVidia did *exactly* the same thing. Shortly after the 9700 Pro started shipping (I can only remember approximately here), nVidia put out its traditional "ATI new-product stomping driver set"....*chuckle*...the 40.xx (beta) Detonators. I was still using my GF4 TI4600 at the time and installed them--they were so buggy that I quickly uninstalled them and reinstalled the last stable drivers, the 30.82s.

If the 9700 Pro release had been a mirror of the past the new Detonator driver set would have propelled the Ti4600 to a speed 15%-40% greater than the 9700 Pro could manage at its introduction, depending on the benchmark. Note that this particular Detonator release seemed to accomplish only one thing reasonably well, and that was to add >1K points to the Ti4600's 3D Mark xx scores....;) And...I think this was nVidia's only intent when releasing these drivers (the 40.42 beta Detonators were so buggy, in fact, that it prompted Epic to mention them in its 'read.me' section for UT2K3--the demo and the game itself--and it wasn't a positive mention, but rather a warning from Epic for all nVidia users to steer clear of the 40.42 beta Detonators and drop back a set for the time being and wait for better drivers.)

So basically, at least it appears that way to me circumstantially, nVidia thought the 9700 Pro release would be "ho-hum" and "business as usual," and that it would handle the release just as it had handled several preceding ATI releases: release a new driver set for its currently shipping product that propels it beyond what the brand-new ATI product can score on its introduction.

*chuckle* As everybody saw, nVidia's little scheme didn't even come close to working--this time...;) And the whole thing was rather embarrassing for nVidia, I should think. That's one reason why I think the 9700 Pro product release caught nVidia by complete surprise. If nVidia had had an inkling of the true power of this product I don't think it would have bothered releasing a buggy, beta driver set just to slightly inflate some benchmark results--I think they would have known the futility of that before they tried it. I've said it before...if I were ATI I would much prefer being underestimated to being overestimated....;) Much...;) That makes product releases like the 9700 Pro that much sweeter, I should think.

Walt, while you do make several good points, don't you think it's a wee bit too early to basically say "I told you so"?
 
bdmosky said:
OpenGL Guy, I think you are holding a double standard there. Saying something like... "That would be a very silly thing on nvidia's part: Why would ATi produce a new part that was slower than the competition's current offerings?" Yet, whenever someone mentions the thought of the NV30 being faster than the 9700 pro, you complain... "Why does everyone assume the NV30 will be faster?" I ask you the same question then... "Why would Nvidia produce a new part that is slower than the competition's current offerings?"
The point is that we have no evidence either way. When I ask a question such as "Why does everyone assume the NV30 will be faster?" I know there are no facts to back it up. When I make a statement such as "It would be silly for nvidia to produce a part that was slower than the competition's," that's a statement of fact, as I see it.
 
OpenGL guy said:
The point is that we have no evidence either way. When I ask a question such as "Why does everyone assume the NV30 will be faster?" I know there are no facts to back it up. When I make a statement such as "It would be silly for nvidia to produce a part that was slower than the competition's," that's a statement of fact, as I see it.
I'm going to nitpick ;) If it is silly for Nvidia to produce a part slower than the competition, then we can assume that it will be faster :p
 
Evildeus said:
OpenGL guy said:
The point is that we have no evidence either way. When I ask a question such as "Why does everyone assume the NV30 will be faster?" I know there are no facts to back it up. When I make a statement such as "It would be silly for nvidia to produce a part that was slower than the competition's," that's a statement of fact, as I see it.
I'm going to nitpick ;) If it is silly for Nvidia to produce a part slower than the competition, then we can assume that it will be faster :p

Let's all get on the same page here, guys.... Bottom line, ATI could have gone out to buy a GF4 in the last 6 months BEFORE the 9700 came out. Until last September, nVidia had no way to do that with the 9700. I'm sure that nVidia wasn't ready for what the 9700 has...... NO ONE WAS! Remember, before the 9700 was out, everyone was saying how hard it was to get that large a chip to scale past even 275; who would have thought they could get 325?

After the past introductions of the original Radeon & the 8500, I'm sure ATI knew they had to make a quantum leap in performance..... isn't that why they bought ArtX? Isn't the R300 core designed using knowledge gained from that purchase?

Now, let's look at nVidia. While nVidia has produced a lot of cards, isn't it true that, from the original TNT to today's GF4 Tis, they are all using the same basic architecture? Yes, they have added a bunch of things - T&L, more pipelines, much better memory controllers, etc. nVidia said (with their typical arrogance), when they bought what was left of 3DFX, that there was little they could use and that they felt they had much better ways to achieve their goals. Why use a wider memory interface when DDR (or DDR II now) memory can do the same? They have always used bleeding edge technology to keep ahead of the pack when they were actually being pushed (which hasn't happened in 3 years!). If not being pushed, they were happy to introduce products that were better than the previous generation, but nothing like a quantum leap - there was no need!

DX9 has made them create a new architecture, but - again with their typical arrogance - they never saw the R300 coming, never expected it to be that quantum leap in performance. They have used their tried and true method of bleeding edge technology, and it has finally bitten them in the butt.... maybe! While the delays have been blamed on the .13 micron process - everyone but Intel has had teething problems with it - doesn't anyone else think this is only part of the story? R300 blindsided nVidia; they never saw it coming. I'm sure nVidia has taken the missed product cycle & tried everything they can to increase the performance of the NV30. It's not enough for nVidia to be as fast as the 9700, it's got to be faster, or nVidia's reputation will suffer bigtime! But you can only do so much to add performance to a design once it's very late in the design.

So, this is what I expect we will see in January or February. NV30 will be faster than the 9700 in SOME benchmarks, slower in others. Taken as a whole, I think the cards will be very competitive with each other. nVidia will PR push the differences big time, because that's all they have. They will take the low road with PR, as they did in the heat of battle with 3DFX, just wait & see! Well, it's already happening with this David Kirk interview. And all we will see on the 18th is a paper introduction. There will be cards there, but no benchmarks. There will be plenty of really nice demos..... but there will be no real meat & potatoes to the launch.

But, as Dennis Miller says after a rant...... I could be wrong! :LOL:
 