NV30 information.

3D acceleration will never be mature until it has reached the stage where it is indistinguishable from reality -- e.g. the visual quality of The Matrix.

There will be times when it slows down, bumps up against tech limits, and we get diminishing returns for improvements, but in the long run, human beings demand visual simulations that match the real and hyperreal. And we will pursue that to the very end, whether in silicon or nanomolecular computers.

Thus, there's still a long road ahead of us. And of course, in the computing industry itself, there's much, much more room to grow as well. We are not yet capable of simulating even the relatively straightforward physics of protein folding (maybe ten years from it), but after we pass that hurdle, there is the problem of simulating the entire human cell (virtual cell project), which is orders of magnitude more difficult than the protein folding problem. Then we have the whole AI can of worms.
 
DemoCoder said:
3D acceleration will never be mature until it has reached the stage where it is indistinguishable from reality -- e.g. the visual quality of The Matrix.

There will be times when it slows down, bumps up against tech limits, and we get diminishing returns for improvements, but in the long run, human beings demand visual simulations that match the real and hyperreal. And we will pursue that to the very end, whether in silicon or nanomolecular computers.

Thus, there's still a long road ahead of us. And of course, in the computing industry itself, there's much, much more room to grow as well. We are not yet capable of simulating even the relatively straightforward physics of protein folding (maybe ten years from it), but after we pass that hurdle, there is the problem of simulating the entire human cell (virtual cell project), which is orders of magnitude more difficult than the protein folding problem. Then we have the whole AI can of worms.

You never know, maybe the PlayStation 3 will be able to simulate protein folding. ;)
 
Anything's possible. Sony said the PS2 could process emotions and guide missiles. Why can't the PS3 be merely seven 250-qubit processors, working in parallel, to simulate every atom in the universe? :rolleyes:
 
Yeah, it doesn't sound "revolutionary enough" to justify all the comments coming out of Kirk and Carmack.

Have to remember that even if this is true (as I'd be inclined to believe), it's not that descriptive at all.

I'm sticking by my earlier statement that programmability is in fact directly related to transistor counts, and the 'revolutionary' feature will be the programmability of the architecture. I have a feeling that the architecture will not only have a much more efficient microarchitecture, but it'll be more flexible than any chip to date from the Big 3. (P10's the wildcard)

'Revolution' or 'Innovation' can be accomplished either by a paradigm shift in the underlying architectural design or by rapidly increasing the amount of usable logic gates - either through lithography advances (which nVidia is, bar none, the best at exploiting) or through external parallel processing (multichip, GRID computing).

I feel that we're on the verge of an enormous jump in the processing power available to the average consumer.

Joe, nice parallel :)

PS. I think Baumann hit the nail on the head with how to get 'free' Multisampling in an IMR.
 
Pete said:
This talk of multichip (Spectre), free FSAA (GigaPixel), and 4-1 compression (FXT) sounds familiar. I've got a good feeling about this. As much as we may lament 3dfx's passing, surely we can't deny a 3dfx-nV pairing will be even better?

Free FSAA is M-buffer (3dfx), not tile-based (GigaPixel).

Mize
 
Yes, GP's selling point was completely free FSAA (no strings attached): its MSAA rendering rate was equal (or as near as damn it as makes no difference) to its normal rendering rate. The M-Buffer was basically a multisampling buffer - Rampage was only due to get 'free' FSAA under specific texturing conditions because of its texture / pixel pipe arrangement. It still would have cost in terms of bandwidth as well, so it still wasn't 'free' - GeForce's MSAA with a 4:1 frame-buffer compression seems to be the sanest way of getting towards 'free' FSAA.
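A back-of-envelope sketch of why MSAA plus 4:1 colour compression gets close to 'free'. The resolution, frame rate, and compression ratio below are illustrative assumptions, not vendor specs, and this counts colour-buffer writes only (no Z, no overdraw):

```python
# Colour-buffer write bandwidth for multisampled rendering, with and
# without a best-case 4:1 framebuffer compression. In a fully compressed
# tile the four samples of a pixel cost roughly what one sample costs,
# which is why 4x MSAA + 4:1 compression can approach the no-AA cost.

def color_bandwidth(width, height, fps, bytes_per_sample, samples, compression_ratio=1.0):
    """Bytes/s written to the colour buffer (write-only, no overdraw assumed)."""
    return width * height * samples * bytes_per_sample * fps / compression_ratio

no_aa   = color_bandwidth(1024, 768, 60, 4, samples=1)
msaa4x  = color_bandwidth(1024, 768, 60, 4, samples=4)
msaa4xc = color_bandwidth(1024, 768, 60, 4, samples=4, compression_ratio=4.0)

print(f"no AA:             {no_aa / 1e9:.2f} GB/s")
print(f"4x MSAA:           {msaa4x / 1e9:.2f} GB/s")
print(f"4x MSAA + 4:1 cc:  {msaa4xc / 1e9:.2f} GB/s")
```

In the best case the compressed 4x figure equals the no-AA figure; in practice not every tile compresses, so it's 'nearly free' rather than free.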
 
Xbox/GF Ti4600

Serious problem with this statement... The Xbox NV2A runs at 233MHz with 64MB of unified RAM and 8 GB/s of bandwidth or less... (??)

The GF4 Ti 4600 is 300MHz, running with 128MB of 650MHz DDR RAM.

The two are not even in the same league performance-wise; NV2A performance is about the same as a GF3 due to its RAM limitations. So NV30 being 2x faster than an NV2A and NV30 being 2x faster than a GF4 Ti 4600 are worlds apart.

I wonder what its performance will really be once all the typical Nvidia BS is cleared away.
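For what it's worth, the peak-bandwidth gap can be worked out from bus width and effective data rate. The figures below are the commonly cited ones (200MHz DDR for the Xbox, 325MHz DDR for the Ti 4600), not official specs, and the NV2A pool is shared with the CPU on top of that:

```python
# Peak memory bandwidth = bus width (bytes) x effective data rate.
# Clocks below are assumed from commonly cited figures, not datasheets.

def peak_bandwidth_gbs(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s for a given bus width and effective rate."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

nv2a   = peak_bandwidth_gbs(128, 400)  # Xbox: 200MHz DDR -> 400MHz effective
ti4600 = peak_bandwidth_gbs(128, 650)  # Ti 4600: 325MHz DDR -> 650MHz effective

print(f"NV2A (shared with CPU): {nv2a:.1f} GB/s")
print(f"GF4 Ti 4600:            {ti4600:.1f} GB/s")
```

That puts the NV2A around 6.4 GB/s (hence the '8 GB/s or less' above) against roughly 10.4 GB/s for the Ti 4600, before the CPU takes its share of the unified pool.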
 
Hellbinder, well theoretically what you are saying is true, however...

With an Xbox, you can optimize for a fixed, specific platform, you probably do not have to deal with as much driver/OS overhead as on a PC, you are not limited by the AGP bus, etc...

I really don't know enough to say if this makes up for all the core/mem clock speed differences between NV2A and the ti4600 in real world situations, but I assume it makes up for some of it at least. So maybe it's not a completely ridiculous performance comparison to make.

just my 2c,
Serge
 