NV30 Kitchen?

nAo

Nutella Nutellae
Veteran
Does anyone know anything about a developers' meeting being held today in London by nVidia? It's supposed to be called 'NV30 Kitchen' or something like that..

ciao,
Marco
 
I would like to know if this is a "what they want" kind of meeting or a "what they are going to have" kind of meeting...

Well, anyway, for an autumn launch nVidia should almost have first silicon out by now...
 
If I recall correctly, a "kitchen" is traditionally a hands-on developer's workshop. Which would indicate that they have working prototype hardware. (Although possibly with missing, broken, or slow features.)

If the chip launches in September, the design was finished last year. So I doubt very much that NVIDIA would be soliciting feedback from developers on what features they want in the chip.

I remember a few years ago NVIDIA did similar presentations on the NV20, which later became known as the GeForce3.

The purpose of the presentations is probably to get developers interested in supporting the new features of the graphics card. Games take two years to make, so developers are currently planning Christmas 2004 games.

The main reasons the NV30's features are being kept secret from the general public are to (a) make them seem more interesting, (b) make it more difficult for ATI's marketing to counter them, and (c) not cut into sales of existing cards.

Hopefully some developer who attends the NV30 presentations will let us know what the chip's features are!

P.S. DirectX 9 has generalized vertex shaders (with subroutines and branches) and generalized pixel shaders (with texture lookups and math mixed freely). I wonder if the NV30 directly implements either of these new shaders? It seems likely that it does.
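To give a rough idea of what "texture lookups and math mixed freely" means, here's a toy C++ sketch of one pixel's work. The Texture type and its sample() helper are made up for the example; this doesn't model DX9's actual API or any real hardware. The point is just that math feeds a dependent texture lookup, which feeds more math, with no fixed address-then-arithmetic split like in the DX8 pixel shaders.

#include <cmath>
#include <cstdio>

struct Color { float r, g, b, a; };

struct Texture {
    // Hypothetical sampler over a tiny procedural pattern, for illustration only.
    Color sample(float u, float v) const {
        float c = 0.5f + 0.5f * std::sin(u * 10.0f) * std::cos(v * 10.0f);
        return { c, c, c, 1.0f };
    }
};

Color shade_pixel(const Texture& bump, const Texture& env, float u, float v) {
    Color n = bump.sample(u, v);                      // 1. texture lookup
    float du = (n.r - 0.5f) * 0.2f;                   // 2. math on the result...
    float dv = (n.g - 0.5f) * 0.2f;
    Color reflected = env.sample(u + du, v + dv);     // 3. ...feeding a dependent lookup
    float scale = 1.0f / (1.0f + du * du + dv * dv);  // 4. and more math afterwards
    return { reflected.r * scale, reflected.g * scale, reflected.b * scale, 1.0f };
}

int main() {
    Texture bump, env;
    Color c = shade_pixel(bump, env, 0.25f, 0.75f);
    std::printf("%.3f %.3f %.3f\n", c.r, c.g, c.b);
}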

 
I have no earthly idea about that meeting, but I can tell you that the chip design specification happens a *long* time before...Well, a long time.

I was actually quite amazed, to be honest with you, at just how far out these things are "locked down"... so to speak.

To draw some sort of comparison... Consider a scenario in which a chip is not to be touched (so far as features go) some 9-12 months before it ever begins selling... then you backtrack X months for the actual development of the thing... and X months before that dedicated to specific R&D issues.

Anyhow, there is a very good reason why nVidia executes as well as it does, and they are exceptionally good about "feature creep," or the lack thereof...
 
Duffer:
The main reasons the NV30's features are being kept secret from the general public are to (a) make them seem more interesting, (b) make it more difficult for ATI's marketing to counter them, and (c) not cut into sales of existing cards
It would be interesting to know whether ATI is sampling its R300 at the same time. Maybe they have already shown working samples to developers..
Hopefully some developer who attends the NV30 presentations will let us know what the chip's features are!
Umh..I don't think so :cry:


Typedef:
I have no earthly idea about that meeting, but I can tell you that the chip design specification happens a *long* time before...Well, a long time.
Ehm.. WHEN?
Anyway.. at least we know that an NV30 C/C++ model does exist:
http://www.cygwin.com/ml/cygwin/2001-12/msg01168.html

Anyhow, there is a very good reason why nVidia executes as well as it does, and they are exceptionally good about "feature creep," or the lack thereof...
I'm not so excited about features, I'm excited about architecture. I really do hope nVidia and ATI come out with something more revolutionary than evolutionary this time, but I know that even if they employ the smartest guys, who come up with wonderful stuff and ideas, they have a market to address, so they are taking little steps...
Besides, I've lost my hope of the R300 being a TBR... now I would like to bet on the NV30. Didn't someone say they were going to use some of Gigapixel's stuff?
Ok, it's time to stop dreaming and to wake up!

ciao,
Marco

 
nao,

I have strong doubts that the NV30 will have anything close to TBR. Rather, a revolutionary IMR design, as I'd guesstimate for the R300 too.

The only use I could personally see for the Gigapixel TBR patents NVIDIA holds is in alternative markets (for example consoles, mobile, PDAs, etc.), but even then not in the immediate future.
 
Ailuros, I share the same strong doubts... but let me dream as long as I can :smile:
 
I agree with Ailuros as well... in fact, I'll go so far as to say: don't expect any DR from the R300 or the NV30, expect a revolutionary IMR.

 
What do you mean by "revolutionary IMR"?
8 pipes + DirectX 9 features + a 256-bit memory bus?
Or a completely new rasterizer (a much more efficient one)?
Or a kind of DSP optimized for 3D?

Guillaume

 
I don't feel that doubling the data bus should be called a revolutionary change in the architecture. Even adding a DX9-compliant vertex and pixel shader shouldn't be considered revolutionary. So what will they do?
They need more bandwidth, and I hope this is not achieved with just some faster external memory. Besides, I feel pixel pipelines as a concept will die sooner or later. Having 4 or more fat pipes that implement every aspect of a DX9+ pixel shader could be a waste of transistors.
I would like to know, with heavy shaders, how much of the time all those functional units built into the pixel pipes are sitting idle :smile:
They should 'migrate' toward a rasterizer where the hardware can walk along the polygon, extracting one or more pixels to work on per clock, with a control unit that can manage many pixels at a time (like 8 or more), issuing each clock a plurality of operations to functional units (like bi/trilinear sampling units for 1D, 2D and 3D textures and cubemaps, DOT1/3/4 units, RCP units, and so on..) shared between all the pixels the fragment walker has issued. In two words: total flexibility :smile:
Obviously this can be done in so many ways... we know how far CPUs have gone in this field with OOOE + s/c/m-threading execution.
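Just to make the picture concrete, here is a toy C++ model of that kind of scheduler. The unit mix, the 8-pixel window and the little per-pixel "shader" are all invented for illustration; it is nothing like real hardware, it only shows shared units picking work from many in-flight fragments each clock.

#include <cstdio>
#include <deque>
#include <vector>

enum class Unit { TEX, DOT, RCP };   // shared functional unit types (made up)

struct Fragment {
    int id;
    std::vector<Unit> ops;   // the operations this pixel still has to run
    size_t pc = 0;           // next operation to issue
    int last_issue = -1;     // clock of the last issue (one op per clock max)
    bool done() const { return pc >= ops.size(); }
};

int main() {
    // Shared functional units: 2 texture samplers, 2 DOT units, 1 RCP unit.
    const std::vector<Unit> units = { Unit::TEX, Unit::TEX, Unit::DOT, Unit::DOT, Unit::RCP };

    // 32 pixels to walk over the polygon, each running the same small "shader".
    std::deque<Fragment> pending;
    for (int i = 0; i < 32; ++i)
        pending.push_back({ i, { Unit::TEX, Unit::DOT, Unit::TEX, Unit::RCP } });

    const size_t in_flight_max = 8;   // the walker keeps up to 8 pixels in flight
    std::vector<Fragment> in_flight;
    int clock = 0, retired = 0;

    while (retired < 32) {
        // Walk new pixels in while there is room.
        while (in_flight.size() < in_flight_max && !pending.empty()) {
            in_flight.push_back(pending.front());
            pending.pop_front();
        }
        // Each clock, every shared unit grabs one ready fragment that needs it.
        for (Unit u : units) {
            for (Fragment& f : in_flight) {
                if (!f.done() && f.last_issue != clock && f.ops[f.pc] == u) {
                    ++f.pc;
                    f.last_issue = clock;
                    break;
                }
            }
        }
        // Retire fragments whose shader has finished.
        for (size_t i = 0; i < in_flight.size();) {
            if (in_flight[i].done()) { in_flight.erase(in_flight.begin() + i); ++retired; }
            else ++i;
        }
        ++clock;
    }
    std::printf("retired 32 fragments in %d clocks\n", clock);
}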

ciao,
Marco
 
Interesting, nAo, but isn't it too risky to release such a revolutionary product so soon?
There won't be a lot of "true" DirectX 8 games this fall (if any), and the architecture you described is optimal for DirectX 8 and later games.
But if this kind of GPU could be twice as fast as the GeForce4 in Unreal 2 and Doom 3 class games...

PS: how many passes does Unreal 2 need? And is it a "true" DX8 game with DX7 compatibility for old cards, or a true DX7 title with shader effects for newer cards?
 
On 2002-02-27 20:19, nAo wrote:
I would like to know, with heavy shaders, how much of the time all those functional units built into the pixel pipes are sitting idle :smile:
They should 'migrate' toward a rasterizer where the hardware can walk along the polygon, extracting one or more pixels to work on per clock, with a control unit that can manage many pixels at a time (like 8 or more), issuing each clock a plurality of operations to functional units (like bi/trilinear sampling units for 1D, 2D and 3D textures and cubemaps, DOT1/3/4 units, RCP units, and so on..) shared between all the pixels the fragment walker has issued. In two words: total flexibility :smile:
Obviously this can be done in so many ways... we know how far CPUs have gone in this field with OOOE + s/c/m-threading execution.

ciao,
Marco

I imagine this isn't much different from what they already do. Except you might be saying that each pipe can do different operations in parallel. It is likely that current architectures flush the pipes when changing context or render states.

I also don't think we're going to see any architectural changes that everyone will think are revolutionary. Maybe a few features here and there. I definitely don't think NVIDIA going to a TBR would be a revolutionary change, although it would be quite a big moment in graphics history to see a new architecture start to take over. Kind of like the architectural changes in the hard drive industry over the years.
 
I imagine this isn't much different from what they already do. Except you might be saying that each pipe can do different operations in parallel. It is likely that current architectures flush the pipes when changing context or render states.

I don't believe this is what they already do. I think they have much more fixed hardware. AFAIK pixel shaders just tell the pixel pipe how to 'configure' itself; once it has started, it crunches pixels in a pipelined fashion.
Maybe they have split the texture fetching and filtering pipeline from the calculation pipeline to allow some kind of 'mild' parallelism. What happens if I need to draw a lot of untextured polygons (e.g. stencil)? I'd love to have an insanely high fill rate in that case, that is, a LOT of very simple pixel pipes, so I would like to reuse all the parameter interpolators to just fill a lot of pixels per clock, even on different non-overlapping polygons at the same time. Obviously, with a hyper-complex pixel shader (see the OpenGL 2.0 specs :smile:) I know that I'll need huge texture read bandwidth, and a high 'on paper' fill rate doesn't give me anything, because most of the time the pipe will sit idle waiting for data to crunch, so I would like to have 1 or 2 effectively working pixel pipes that employ ALL (or, being realistic, most of) the interpolators to work on a small pixel footprint.
Maybe the current hardware already has some kind of reconfigurability when it connects one pixel pipe to another, but that is just not enough.
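Just as a back-of-envelope, with every number below picked out of thin air to show the shape of the trade-off rather than to describe any actual card:

#include <cstdio>

// Back-of-envelope only: all figures are invented, just to show why a high
// 'on paper' fill rate means little once a shader spends most of its time
// waiting on texture reads.
int main() {
    // Case A: untextured fill (stencil, shadow volumes): the only limit is
    // how many simple pixel pipes can be fed per clock.
    const int simple_pipes = 16;
    std::printf("untextured fill: %d pixels/clock\n", simple_pipes);

    // Case B: a heavy shader on 4 'fat' pipes. Suppose each pixel needs 16
    // ALU clocks plus 8 texture fetches, and a fetch stalls its pipe for 20
    // clocks on a texture-cache miss, which happens 25% of the time.
    const double fat_pipes = 4.0;
    const double alu_clocks = 16.0, fetches = 8.0, miss_rate = 0.25, miss_penalty = 20.0;
    const double clocks_per_pixel = alu_clocks + fetches * miss_rate * miss_penalty;
    std::printf("heavy shader:    %.2f pixels/clock (vs %.0f on paper)\n",
                fat_pipes / clocks_per_pixel, fat_pipes);
}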


I also don't think we're going to see any architectural changes that everyone will think are revolutionary. Maybe a few features here and there
The obvious features will be better AA, better anisotropic filtering, and a lot of highly programmable functions. Maybe we will have some revolution there; at least I've heard of some.. :smile:

I definitely don't think NVIDIA going to a TBR would be a revolutionary change, although it would be quite a big moment in graphics history to see a new architecture start to take over. Kind of like the architectural changes in the hard drive industry over the years.
It's better to stay grounded... deferred rendering is unfortunately almost dead in the consumer market. Maybe they made the right choice...

ciao,
Marco
 