alexsok said:
I knew it had to be there, but no harm's done to beyond3d
alexsok said:
g__day said:I find it hard to believe - like all the skeptics. I understand h/w OpenGL calls are going to be a lot faster than general-purpose s/w distributed across fast CPUs, which is really all a rendering farm is. But 50,000 times faster - initially I thought not.
But remember John Carmack himself said just a few months back that it will be possible sometime in 2003 - most likely in late 2003. JC knows what he is on about. While folk said it's 20 years away, they were using s/w running on 100+ fast CPUs vs specialised parallel h/w in a GPU. I know the arguments; the joy is that in 2 weeks we may get our first glimpse - and within a year the winners and losers in this bet should be clear.
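A quick back-of-the-envelope check of where a figure like "50,000 times faster" can come from; the offline render time per frame and the target frame rate below are assumptions for illustration only, not numbers quoted in the thread:

```python
# Rough sanity check of a "realtime vs. render farm" speedup figure.
# Both inputs are assumptions for illustration; plug in your own numbers.

def realtime_speedup(offline_minutes_per_frame, target_fps=24.0):
    """Factor by which rendering must speed up to turn an offline
    per-frame time into a realtime budget of 1/target_fps seconds."""
    offline_seconds = offline_minutes_per_frame * 60.0
    realtime_budget_seconds = 1.0 / target_fps
    return offline_seconds / realtime_budget_seconds

print(realtime_speedup(35))   # ~35 min/frame on a farm -> ~50,400x to hit 24 fps
print(realtime_speedup(90))   # ~90 min/frame           -> ~129,600x
```

The headline multiplier is extremely sensitive to the assumed per-frame farm time, which is part of why the claim sounds either plausible or absurd depending on who quotes it.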
DemoCoder said:What's important in the end is not whether they are calculating every pixel using the exact same algorithm as PRMan with the exact same input data, but whether it is a reasonable facsimile on your monitor. For most people, they can drop down the texture resolution and geometry load and they won't notice unless it's projected at 4000x4000 resolution.
DemoCoder said:What's important in the end is not whether they are calculating every pixel using the exact same algorithm as PRMan with the exact same input data, but whether it is a reasonable facsimile on your monitor.
martrox said:It's martrox
not matrox or martox.......<sigh>
well better to be misspelled than ignored.....
(all in fun, guys)
There's no way it can do the whole film in realtime, but at a lower resolution, with reduced FSAA, texture filtering, geometry, and no motion blur, it may be able to do pretty well for most of the less complex scenes (no explosions/complex transparent effects).
After all, each frame only averaged 18.42 layers. It's not unbelievable that the NV30 could render that many at a reasonable resolution in realtime (around 1024x768 with mild FSAA, though maybe a bit lower). That is, of course, unless the shading effects are particularly complex.
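To put the layer count in fill-rate terms, here's a minimal sketch; the resolution, frame rate, FSAA factor, and the GPU fill-rate figure are all assumptions for illustration, not published NV30 specs:

```python
# Fill rate needed to composite ~18.42 layers per frame in realtime.
# Resolution, frame rate, FSAA factor, and the assumed GPU fill rate
# are illustrative assumptions, not NV30 specifications.

def required_sample_rate(width, height, layers, fps, fsaa_samples):
    """Shaded samples per second needed for layered rendering."""
    return width * height * layers * fps * fsaa_samples

needed = required_sample_rate(1024, 768, 18.42, 30, 4)   # ~1.74e9 samples/s
assumed_gpu_fill_rate = 2e9                              # assumed order of magnitude

print(f"needed: {needed/1e9:.2f} Gsamples/s, "
      f"~{needed/assumed_gpu_fill_rate:.0%} of the assumed fill budget")
```

On those assumptions the raw sample rate lands in the right ballpark, which is why the shading complexity per layer, not the layer count itself, is the real question.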
Regardless, the GeForce3's rendering of Final Fantasy had the geometry turned way down, for a very short sequence, with very limited shading effects. If the NV30 can do close to full geometry (enough for pixel-subpixel geometry at the chosen resolution...which does mean less than the movie used), and full shading effects, for a decent sequence, then I'll be happy. This may be possible...but it would be hard to get it to run well on the NV30.
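As for what pixel-subpixel geometry at the chosen resolution implies for triangle counts, a minimal estimate treats it as roughly one visible triangle per sample; the film-frame resolution and sample counts below are assumptions, not the movie's actual settings:

```python
# Visible-triangle budget for roughly one triangle per sample.
# The "film" resolution and samples-per-pixel below are assumptions,
# not the settings actually used for the movie.

def visible_tris(width, height, samples_per_pixel=1):
    return width * height * samples_per_pixel

game_budget = visible_tris(1024, 768)       # ~0.79M triangles
film_budget = visible_tris(2048, 1152, 4)   # assumed ~2K frame, 4 spp -> ~9.4M

print(f"movie-style geometry is roughly {film_budget / game_budget:.0f}x the load")
```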
Update:
And, of course, if the NV30 can indeed do Final Fantasy-quality rendering, then it may be important for nVidia for more than just PR. If just one of these chips can actually get somewhere close to rendering a real Final Fantasy movie frame in realtime, just imagine if somebody (Quantum3D?) produced large arrays of these chips specially designed for offline rendering. The performance increase for these farms could be phenomenal.
Humus said:martrox said:It's martrox
not matrox or martox.......<sigh>
well better to be misspelled than ignored.....
(all in fun, guys)
Alright, Maxtor.
DemoCoder said:The primary difference will be the dataset size, and as you mentioned, film quality AA (both temporal and full scene), which IIRC is 64 samples minimum.
However, the lack of ray tracing is a non-issue, since PRMan is not a raytracer.
(PRMan is dog slow anyway, which is one of the reasons why Pixar sued Gritz and his colleagues into oblivion, taking ExLuna and BMRT off the market)
At least not on a single chip (more on that in a moment). For me, it's pretty simple.
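For scale, the "64 samples minimum" film-AA figure quoted above translates into a sample rate along these lines; the resolution, frame rate, and the 4x consumer FSAA comparison point are assumptions for illustration:

```python
# Sample-rate cost of 64-sample film-quality AA vs. a typical consumer setting.
# Resolution, frame rate, and the 4x consumer figure are assumptions; the
# 64 samples/pixel comes from the quote above, and in a REYES-style renderer
# those samples are also jittered in time, so the same budget buys motion blur.

width, height, fps = 1024, 768, 30
film_spp, consumer_spp = 64, 4

film_rate = width * height * fps * film_spp          # ~1.51e9 samples/s
consumer_rate = width * height * fps * consumer_spp  # ~9.4e7 samples/s

print(f"film AA: {film_rate/1e9:.2f} Gsamples/s, "
      f"{film_spp // consumer_spp}x a 4x-FSAA consumer setting")
```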