It should be interesting to know if ATI is sampling its R300 at the same time. Maybe they have already shown working samples to developers..

The main reasons the NV30's features are being kept secret from the general public are to (a) make them seem more interesting, (b) make it more difficult for ATI's marketing to counter them, and (c) avoid cutting into sales of existing cards.
Umh.. I don't think so.

Hopefully some developer who attends the NV30 presentations will let us know what the chip's features are!
Emh.. WHEN?

I have no earthly idea about that meeting, but I can tell you that the chip design specification happens a *long* time before... well, a long time.
I'm not so excited about features, I'm excited about architecture. I really do hope nvidia and ati come out with something more revolutionary than evolutionary this time, but I know that even if they employ the smartest guys, who come up with such wonderful stuff and ideas, they have a market to address and so they are taking little steps...

Anyhow, there is a very good reason why nVidia executes as well as it does, and they are exceptionally good about "feature creep," or rather the lack thereof...
On 2002-02-27 20:19, nAo wrote:
I would like to know, with heavy shaders, how much time all those functional units built into the pixel pipes sit idle :smile:
They should 'migrate' toward a rasterizer where the hardware can walk along the polygon, extracting one or more pixels to work on per clock, with a control unit that can manage many pixels at a time (like 8 or more), issuing each clock a plurality of operations to some functional units (like bi-/trilerp sampling units for 1D, 2D, 3D textures and cubemaps, DOT1/3/4 units, RCP units, and so on..) shared between all the pixels the fragment walker has issued. In two words: total flexibility :smile:
Obviously this can be done in so many ways... we know how far CPUs have gone in this field with out-of-order execution plus s/c/m/threading.
ciao,
Marco
I imagine this isn't much different from what they already do, except that you might be saying each pipe can do different operations in parallel. It is likely that current architectures flush the pipes when changing context or render states.
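As a toy illustration of the shared-unit scheduling nAo describes, here is a minimal sketch: a window of in-flight pixels where, each clock, every pixel tries to issue its next operation to a free shared functional unit. The unit mix, the window size of 8, and the op sequences are invented for the example; none of this is anything NV30 is known to do.

```python
from collections import deque

# Hypothetical shared functional units and fragment-walker window size.
UNITS = {"tex": 2, "dot": 2, "rcp": 1}
WINDOW = 8

def run(pixels):
    """pixels: list of op sequences, e.g. ["tex", "dot", "rcp"].
    Returns (clocks, busy_slots, total_slots); utilization is
    busy_slots / total_slots."""
    waiting = deque(pixels)
    in_flight = []               # each entry: remaining ops for one pixel
    clocks = busy = total = 0
    while waiting or in_flight:
        # Refill the window with fresh pixels from the rasterizer.
        while waiting and len(in_flight) < WINDOW:
            in_flight.append(list(waiting.popleft()))
        free = dict(UNITS)       # unit availability this clock
        for ops in in_flight:
            kind = ops[0]
            if free.get(kind, 0) > 0:   # issue to a free shared unit
                free[kind] -= 1
                ops.pop(0)
        in_flight = [ops for ops in in_flight if ops]  # retire done pixels
        clocks += 1
        busy += sum(UNITS[k] - free[k] for k in UNITS)
        total += sum(UNITS.values())
    return clocks, busy, total

# 8 identical pixels: two texture lookups, a dot product, a reciprocal.
clocks, busy, total = run([["tex", "tex", "dot", "rcp"]] * 8)
print(clocks, f"{busy / total:.0%}")
```

Even in this tiny model the shared units spend a good fraction of their slots idle on a uniform workload, which is exactly the kind of waste a more flexible issue scheme would try to soak up with a mix of pixels at different stages.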
The obvious features will be better AA, better anisotropic filtering, and a lot of highly programmable functions. Maybe we will have some revolution here; at least I heard of some.. :smile:

I also don't think we're going to see any architectural changes that everyone will consider revolutionary. Maybe a few features here and there.
It's better to be grounded... deferred rendering is unfortunately almost dead in the consumer market. Maybe they made the right choice...

I definitely don't think nvidia going to a TBR would be a revolutionary change, although it would be quite a big moment in graphics history to see a new architecture start to take over. Kind of like the architectural changes in the hard drive industry over the years.