For some strange reason I have a post to make re: NV30

Doomtrooper said:
Mulciber said:
Doomtrooper said:
Nvidia is 5 months late on this claim; the R300 ran Lord of the Rings in real time, along with technology demos also done in real time. Scratch that one off.

lol, and you screamed bloody murder when nVidia rendered a scene from Final Fantasy: The Spirits Within

I've seen the Demo shown at Siggraph..maybe one should wait and see it themselves. I never screamed anything..must be thinking of someone else you stalk.

maybe it was hellbinder, I get you two confused :p
 
There are only a couple of things that bug me about this...

First, ATI themselves clearly stated that the R300 was the target for DX9. I'll do a search for the quotes..

Second.. Why is John Carmack making statements like this, especially after he stated that the R300 had the perfect feature set for Doom III? Yet he now says..

My current work on Doom is designed around what was possible on the original GeForce, and reaches an optimal implementation on the NV30

Clearly he had these specs at E3.. so why even waste time saying that ATI has the perfect feature set for Doom, only to now turn right around and say they are not good enough.... ????

The entire thing is rather troublesome to me.. There is not THAT much difference between the two cards.. one is more programmable and handles more instructions.. but... really... three card cycles from now, will the stats or speed of these cards really matter?

Finally... Does anyone else think that it is a little below the belt for Nvidia/Microsoft to change the specs at this point..??? Especially when the current specs for the R300 are so far beyond the last generation already?

I have some more questions.. but I'll wait for more information first..
 
If they change the specs now, it would be pretty screwy. That would make it twice that Microsoft changed the specs after Ati designed their board for it. I really don't see that happening.
 
Qroach said:
I just need to get this straight. I saw many people screaming murder because Nvidia wasn't DX 8.1 compatible, and now that they are fully DX9 compatible they still scream murder because NV went beyond the DX9 spec? When in reality, they are fully DX9 compliant and it appears that ATI isn't?

??

First of all, I don't see anyone screaming murder one way or another. Any company who delivers a part with more "advanced" pipelines is cool. There are of course healthy debates to be had on whether the "more advanced" nature has any practical impact on the product and its value to the consumer.

Second,

Where is anyone getting the idea that R-300 won't be DX9 compliant, or that MS is changing DX9 specs?! All I can gather from Ben6's post with respect to DX9 and these parts is timing: R-300 will arrive before DX9 is released, and NV30 and DX9 should arrive at around the same time.
 
Thanks Ben! I think for the most part, this is all information that has been disclosed in the past few days by nVidia at Siggraph or on their website. The things that I find interesting:


3. Vertex Shader programs can be 65,536 instructions in length, with loops, branches, etc.; it's also considered one pass.


What is the actual number of operations for the shader on an NV30? In the Siggraph notes it was 256, which corresponds to the R300 program space. In the CineFX markitechture paper it was noted as 1024 under the NV3x column. Are they referring to NV35 in that specific case? 256 makes sense for NV30.
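One possible way to square the different numbers (and this is purely my own reading, not anything Ben or nVidia has confirmed) is that the smaller figure is the static program size, while the 65,536 figure is the executed-instruction cap once loops and branches are counted. A toy sketch of that distinction, with purely illustrative loop sizes:

```python
# Toy illustration of a static vs. executed instruction limit for a vertex program.
# The 256 / 65,536 split is my own reading of the numbers, NOT a confirmed NV30 spec,
# and the loop sizes are made up.
STATIC_LIMIT = 256        # instructions actually stored in the program
EXECUTED_LIMIT = 65_536   # instructions executed per vertex, per pass, via loops/branches

loop_body = 200           # hypothetical loop body size (static instructions)
iterations = 300          # hypothetical trip count

assert loop_body <= STATIC_LIMIT
executed = loop_body * iterations
print(executed, executed <= EXECUTED_LIMIT)   # 60000 True -- one small program, many executed ops
```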


6. NVIDIA will be using a form of DDR2 memory. This is CONFIRMED information. What clockspeeds, what bit interface, or memory bandwidth I cannot confirm at this time


Given David Kirk's 128-bit comments, it was definitely assumed that they would go DDR2 in order to compete with R300, but there was never any confirmation. So this is new (at least to me!). The question is, if they do indeed go with a 128-bit interface, will they be able to keep an 8x2 architecture fully fed? If they go with 256-bit, then ATI needs to get their R300/DDR2 boards out ASAP if they want to remain somewhat competitive.
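For what it's worth, here is the back-of-the-envelope arithmetic behind that; every clock speed in it is a guess on my part, not confirmed information:

```python
# Back-of-the-envelope peak memory bandwidth; all clock speeds are guesses, not specs.
def bandwidth_gb_s(bus_bits, mem_clock_mhz, transfers_per_clock=2):
    """Peak bandwidth = bus width in bytes x effective transfer rate (DDR/DDR2 = 2 per clock)."""
    return (bus_bits / 8) * (mem_clock_mhz * 1e6 * transfers_per_clock) / 1e9

print(bandwidth_gb_s(128, 450))   # ~14.4 GB/s -- hypothetical NV30, 128-bit DDR2 @ 450 MHz
print(bandwidth_gb_s(128, 500))   # ~16.0 GB/s -- same part at 500 MHz
print(bandwidth_gb_s(256, 310))   # ~19.8 GB/s -- R300 (Radeon 9700 Pro), 256-bit DDR @ 310 MHz
```

Even on the optimistic 500 MHz guess, a 128-bit part sits a few GB/s short of R300, which is exactly why the 8x2 question matters.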


7. 12-bit, 16-bit, 32-bit float formats supported


A 12-bit float format? That doesn't seem to make a lot of sense no matter how you look at it. Precision sucks and it doesn't align nicely. The only thing it could possibly be used for is to take displayable surfaces to the next level by using 12 bits per component rather than the current 10 in Parhelia and R300.
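To illustrate both complaints with some quick math (the sign/exponent/mantissa split below is entirely my own guess; no actual layout has been published):

```python
# Hypothetical 12-bit float layout -- the 1/5/6 sign/exponent/mantissa split is purely
# a guess for illustration; nothing official describes the format.
SIGN, EXP, MANT = 1, 5, 6
assert SIGN + EXP + MANT == 12

print(2 ** MANT)        # 64 mantissa steps -- roughly 2 decimal digits of precision

# Alignment: 4 components x 12 bits = 48 bits per pixel, which straddles 32-bit words,
# unlike 4x8 (32 bits), 10/10/10/2 (32 bits), or 4x16 (64 bits) formats.
bits_per_pixel = 4 * 12
print(bits_per_pixel, bits_per_pixel % 32)   # 48, 16 -- doesn't pack cleanly
```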


The architectural decision in the NV30 to allow full floating point precision all the way to the framebuffer and texture fetch, instead of just in internal paths, is a good example of far sighted planning.


Same as R300. This whole quote reads like the marketing quote they got from him for their NV30 news release. Don't read more into it than exactly what he says.


11. NV30 and DX9 schedules are now aligned. This may change if Microsoft delays DX9, but not the other way around.


Hmm, what to make of this comment?

1) NV30 is definitely coming out in November
2) NV30 is definitely coming out after DX9 release

To reiterate Joe's point, nobody has said that R300 is not DX9 compliant. In fact ATI has made a statement saying that it is. Nobody's changing any specs at this stage.
 
CMKRNL,

If the 128 bits through the entire pipeline is the same on both, and their base is 256, the same as R300...

WHY is John Carmack posting these kinds of comments????

This is going to give MANY people the wrong concept of the situation..
 
jjayb said:
If they change the specs now, it would be pretty screwy. That would make it twice that Microsoft changed the specs after Ati designed their board for it. I really don't see that happening.

Especially when ATI was pimping DX9 in the 9700 launch here..

http://www.on24.com/clients/ati/ond...f=&contenttype=&format=&userreg=n

 
WHY is John Carmack posting these kinds of comments????

There is one logical explanation: That comment is possibly old. It is pretty obvious to me that nVidia briefed the press and developers on Nv30 before ATI did the same for R-300.

It's obvious not from this quote, but from people like Anand, who commented a few months ago on how "great" NV-30 looked on paper while not yet having heard any info on R-300.

I also remember, for example, in one of the Cg coverages, that someone put up a "slide" from an nVidia presentation with comments from Carmack that were not legible. The commentary from the web reviewer was along the lines of: "nah...nVidia didn't have any comments from Carmack about Cg, THESE comments were about NV30...but we can't show them yet."

In short, I will not be surprised at all to find out that Carmack made those comments upon being briefed about Nv30 specs, and before he was briefed fully on R-300. Of course, he could only assume at that time that NV30 would beat R-300 to market...
 

If the 128 bits through the entire pipeline is the same on both, and their base is 256, the same as R300...

WHY is John Carmack posting these kinds of comments????


I guess the first question is, where is this quote coming from? Did he post this in his .plan file? If not, then there's a good chance that this is simply a marketing quote that nVidia obtained from him for their NV30 announcement news release. Companies like ATI and nVidia typically try to get some big-name quotes to help validate the significance of their products to the general public.

In any case, he's not saying that R300 doesn't do the same things. He's simply saying that NV30 DOES do these things. It just reads very marketing-like to me. If this is indeed in his .plan file, then I take it back, because JC doesn't pull any punches there and will always speak his mind. Of course, the other possibility, as Joe pointed out, is that this quote is old and perhaps out of context, as he may not have had R300 information at the time he made it.
 
I'm trying to get a date on that Carmack quote; it was after he had the R300, however. The Cg launch was June 13, 2002, and E3 was May 22. I believe the comment was from this timeframe, but I am waiting on a response from Mr. Carmack or Nvidia.
 
12. DX9 shift from bandwidth to computation. Pixel Shading is not pixel filling

Also, it's starting to seem more and more evident to me that NV-30 is not going to be a world-beater in performance with today's games. At this point, here are my assumptions for NV30:

16 Texture / "Shader" units, compared to R-300's 8
128 bit, DDR2 memory, assume maybe 450Mhz: Approx 15 GB/sec.

What that would appear to be is a very "bandwidth limited" part...when in heavy "fill rate" situations.

That would, however, fit in very nicely with the quote on top of this page, and other nVidia comments about "pixel quality" and not performance. Reading into the "pixel shading is not pixel filling" quote above, it seems that heavy use of pixel shading ops is more computationally expensive, not bandwidth/fillrate. So the NV30 architecture may be more "balanced" for very heavy pixel shading scenarios, while being unbalanced for heavy "fill rate" scenarios.

What that would imply is that, in today's games and games for the foreseeable future, I'd expect NV30's performance to fall in line with its memory bandwidth. So if it's 128-bit DDR2, then I would expect NV30 to fall between Ti-4600 and R-300 performance in games.

On the other hand, in the "3D renderfarm" type market (loads and loads of pixel shading ops), the extra "computational power" of the NV30 would pull it ahead, performance wise, of the R-300. Bandwidth is less important.
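To make the "balanced for shading, unbalanced for fill" idea concrete, here's a crude bottleneck sketch; the bandwidth, core clock, bytes-per-pixel cost, and shader length in it are all assumptions of mine:

```python
# Crude bottleneck sketch: fill-heavy vs. shader-heavy workloads.
# Bandwidth, core clock, bytes/pixel, and shader length are ALL assumptions.
bandwidth = 14.4e9        # bytes/s, assumed 128-bit DDR2 @ 450 MHz
core_clock = 400e6        # Hz, assumed
pipes = 8

# Fill-heavy: ~8 bytes of framebuffer traffic per pixel (color write + Z read/write)
fill_traffic = pipes * core_clock * 8
print(fill_traffic / bandwidth)     # ~1.8 -- the pipes want more bandwidth than exists

# Shader-heavy: a 50-instruction program keeps each pipe busy ~50 clocks per pixel,
# so pixels (and traffic) per second drop by that factor and the ALUs become the limit
shader_traffic = (pipes * core_clock / 50) * 8
print(shader_traffic / bandwidth)   # ~0.04 -- bandwidth is nowhere near the bottleneck
```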

All speculation of course....can change tomorrow if Ben6 decides to post again. :)
 
Joe DeFuria said:
12. DX9 shift from bandwidth to computation. Pixel Shading is not pixel filling

Also, it's starting to seem more and more evident to me that NV-30 is not going to be a world-beater in performance with today's games. At this point, here are my assumptions for NV30:

16 Texture / "Shader" units, compared to R-300's 8
128 bit, DDR2 memory, assume maybe 450Mhz: Approx 15 GB/sec.

What that would appear to be is a very "bandwidth limited" part...when in heavy "fill rate" situations.

That would, however, fit in very nicely with the quote on top of this page, and other nVidia comments about "pixel quality" and not performance. Reading into the "pixel shading is not pixel filling" quote above, it seems that heavy use of pixel shading ops is more computationally expensive, not bandwidth/fillrate. So the NV30 architecture may be more "balanced" for very heavy pixel shading scenarios, while being unbalanced for heavy "fill rate" scenarios.

What that would imply is that, in today's games and games for the foreseeable future, I'd expect NV30's performance to fall in line with its memory bandwidth. So if it's 128-bit DDR2, then I would expect NV30 to fall between Ti-4600 and R-300 performance in games.

On the other hand, in the "3D renderfarm" type market (loads and loads of pixel shading ops), the extra "computational power" of the NV30 would pull it ahead, performance wise, of the R-300. Bandwidth is less important.

All speculation of course....can change tomorrow if Ben6 decides to post again. :)

sounds reasonable to me
 
Joe DeFuria said:
12. DX9 shift from bandwidth to computation. Pixel Shading is not pixel filling

Also, it's starting to seem more and more evident to me that NV-30 is not going to be a world-beater in performance with today's games. At this point, here are my assumptions for NV30:

16 Texture / "Shader" units, compared to R-300's 8
128 bit, DDR2 memory, assume maybe 450Mhz: Approx 15 GB/sec.

What that would appear to be is a very "bandwidth limited" part...when in heavy "fill rate" situations.

That would, however, fit in very nicely with the quote on top of this page, and other nVidia comments about "pixel quality" and not performance. Reading into the "pixel shading is not pixel filling" quote above, it seems that heavy use of pixel shading ops is more computationally expensive, not bandwidth/fillrate. So the NV30 architecture may be more "balanced" for very heavy pixel shading scenarios, while being unbalanced for heavy "fill rate" scenarios.

What that would imply is that, in today's games and games for the foreseeable future, I'd expect NV30's performance to fall in line with its memory bandwidth. So if it's 128-bit DDR2, then I would expect NV30 to fall between Ti-4600 and R-300 performance in games.

On the other hand, in the "3D renderfarm" type market (loads and loads of pixel shading ops), the extra "computational power" of the NV30 would pull it ahead, performance wise, of the R-300. Bandwidth is less important.

All speculation of course....can change tomorrow if Ben6 decides to post again. :)

Of course, if they do have a superior anisotropic filtering and anti-aliasing technique up their sleeves, it could level out the performance in extremely bandwidth-limited situations and still provide extremely good-looking pixels :)
 
Umm...that John Carmack comment has been familiar to me since last week...I've seen it quoted by someone prior to this, and I'm having a serious case of deja vu. I had the impression that it was part of the info released around the time of the R300 launch, but represented an older quote by Carmack.

I think you are getting all in a tizzy about nothing, Doom and Hell. There is nothing in the info that Ben listed that is surprising, just confirmation of some things that were speculation only before. What Joe said, etc.

EDIT: Well, gee Ben, I guess the deja vu is because this is the second time you've quoted it on this board. :LOL:
 
Doomtrooper said:
Nvidia is 5 months late on this claim; the R300 ran Lord of the Rings in real time, along with technology demos also done in real time. Scratch that one off.

And the GeForce4 ran Final Fantasy in realtime. We don't know how much better the R300's demo really was.

DDR2 will not compete with a 256 bit bus, and I see no mention of one here.

Why not? DDR2 on a 128-bit bus should put nVidia within roughly 20% of the bandwidth of the R300. I see no reason why this difference cannot be made up for with more advanced memory bandwidth savings technology.

1024 instructions is overkill....it would be dead slow running all 1024 in one pass.

To be accurate, it's 1024 instructions per pass minus the number of constants used (up to 512 constants). And why is it overkill? While I agree that it would probably hurt performance very significantly, we have no idea how many clocks it would take to produce a pixel with such a long program, and therefore just how much performance it would eat up. In other words, it may yet be feasible to use such long programs in games in a sporadic manner.
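Just to put a (heavily assumption-laden) number on how badly it could hurt: if each of 8 pipes retired roughly one instruction per clock at around 400 MHz, a full 1024-instruction program on every pixel would look something like this:

```python
# Rough worst-case cost of a 1024-instruction pixel program; pipe count, core clock,
# and 1 instruction/pipe/clock throughput are all guesses for illustration.
pipes = 8
core_clock = 400e6                                    # Hz, assumed
instructions = 1024

pixels_per_sec = pipes * core_clock / instructions    # ~3.1 million pixels/s
fps_1024x768 = pixels_per_sec / (1024 * 768)          # ~4 fps if EVERY pixel ran it
print(pixels_per_sec, fps_1024x768)
```

Which is exactly why such programs would only make sense sporadically, on a handful of surfaces, rather than across the whole frame.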

Additionally, even if a 1024-instruction program would absolutely murder performance on the NV30, it's still better than multipass, and it reduces the need for auto-multipass when you want to consider backwards-compatibility.

Ben's information is interesting, but from past experience it has been 'dead' wrong 50% of the time.

I'd really like to see some links, if you can find some information he's posted in the past that's been wrong.
 
I'd love to, but since the old forums are gone that's not possible. Some other veteran members will remember the GeForce 3 Ti threads where he stated it was not just a refresh..

If you want to see an R300 do some real-time rendering, check out the movie links below and fast-forward to about 75% of the movie feed...there are lots of real-time demos and the final one lasts about 5 mins.

http://www.on24.com/clients/ati/ond...f=&contenttype=&format=&userreg=n
 
wow that's going to haunt me for years :oops:. Seriously though, maybe one day I'll tell the whole story of what happened. For now, I can't
 
Doomtrooper said:
I'd love to, but since the old forums are gone that's not possible. Some other veteran members will remember the GeForce 3 Ti threads where he stated it was not just a refresh..

A couple of things about that.

It seems to be more or less accepted now that nVidia actually had some form of the GeForce4 cards ready at the time they released the GeForce3 Ti.

nVidia also did state publicly before then that the next GeForce card "wouldn't just be another refresh."

In other words, things changed. They always do. You can't take much of anything that is said before a product actually hits the shelves with certainty.

That example doesn't damage his credibility at all to me.
 
Back up the truck. I enjoy reading Ben's posts...but he has been wrong in the past and I personally take them with a grain of salt. Nothing to do with credibility :rolleyes:


BTW, be sure to check out the real-time 'Pipe Dreams' demo that was rendered offline last year @ 10 fps and is running in real time @ 50 fps on the 9700.
 
Doomtrooper said:
Back up the truck. I enjoy reading Ben's posts...but he has been wrong in the past and I personally take them with a grain of salt. Nothing to do with credibility :rolleyes:

Just consider the type of information that is being given out.

As long as Ben is correct about the current state of the NV30's design, there are many things he cannot realistically be incorrect about.

For example, it's pretty much assured that the design of the NV30 has been finalized for some time. So, all of the explanations of the features of the NV30, assuming Ben's info is currently correct, will remain correct when the NV30 ships.

The data on the ship date, however, may not be accurate (and since it's somewhat dated, by his own admission, this doesn't seem unlikely). Also, since MS has apparently delayed DX9's release until November (According to other posters here), that's probably the currently-planned timeframe for release of the NV30. But...nobody, not even nVidia, is absolutely certain on this. They may have a better idea than anybody here, but things happen.
 