Final Fantasy & NV30

g__day said:
I find it hard to believe - like all the skeptics. I understand h/w OpenGL calls are going to be a lot faster than general distributed s/w on fast CPUs, which is all a rendering farm really is. But 50,000 times faster - initially I thought not.

But remember, John Carmack himself said just a few months back that it will be possible sometime in 2003 - most likely in late 2003. JC knows what he is on about. While folk said it's 20 years away, they were using s/w running on 100+ fast CPUs vs specialised parallel h/w in a GPU. I know the arguments; the joy is that in 2 weeks we may get our first glimpse - and within a year the winners and losers in this bet should be clear.

I think I remember reading that the key is a transition on how the frames are actually computed. No graphics card will be able to render FF:TSW or Toy Story according to the same algorithms and methods used by render farms. But they will be able to perform similar style calculations using other methods (shader passes) with increasingly frightening speed. The way things are done changes, and being locked into one method can get you passed over very quickly.

I think this emphasis on render farm type calculations blinds people to the fact that nearly the same effects can be done using alternate techniques on existing or near-term hardware. That's the true transition as we move to "FF:TSW in real time."
 
The render farms are working with *enormous* geometry loads, texture sizes, and output resolutions. I recall Pixar saying that during Toy Story, over 50% of PRMan execution time was spent on *texture sampling* and disk I/O.

On your monitor, the NV30 is going to be using much smaller textures and it won't be tessellating triangles to the subpixel level. Once you drop the data sizes down, the NV30 and R9700 can approximate PRMan rendering. You'll have to do something about displacement map shaders, but otherwise, I don't see the big hurdle.

What's important in the end is not whether they are calculating every pixel using the exact same algorithm as PRMan with the exact same input data, but whether it is a reasonable facsimile on your monitor. For most people, you can drop the texture resolution and geometry load down and they won't notice unless it's projected at 4000x4000 resolution.
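Just to put a rough number on how quickly the data load shrinks when you drop texture resolution - the sizes below are purely illustrative, not actual production figures:

Code:
# Toy comparison of texture memory: film-res map vs. game-res map.
# Sizes are assumptions for illustration, not actual production numbers.
def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel

film_map = texture_bytes(4096, 4096)   # assumed film-quality texture
game_map = texture_bytes(512, 512)     # assumed in-game texture

print(film_map / game_map)  # 64.0 -> dropping 4K maps to 512 cuts the data 64x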
 
DemoCoder said:
What's important in the end is not whether they are calculating every pixel using the exact same algorithm as PRMan with the exact same input data, but whether it is a reasonable facsimile on your monitor. For most people, you can drop the texture resolution and geometry load down and they won't notice unless it's projected at 4000x4000 resolution.

Just as a comparison, I looked up the technical specs for the cameras used in Star Wars Episode II: Attack of the Clones. If you don't know, the movie was "filmed" completely digitally and shown in many theatres on digital projectors. The cameras used were modified Sony HDW-F900s that recorded in 24P format - 24 frames/sec progressive scan - at a resolution of 1920x1080.

I don't know what resolution render farms use for final transfer.
 
DemoCoder said:
What's important in the end is not whether they are calculating every pixel using the exact same algorithm as PRMan with the exact same input data, but whether it is a reasonable facsimile on your monitor.

My problem with this is that NV is not telling us about the optimizations it's using. In an ideal world they would say "we can closely approximate FF in real time." But no, they make the masses believe that they can do FF in real time. Just go look at other forums where people are blindly following this statement. However, I also realize that EVERY company has done this, and if I cry about one, I should cry about them all. I just don't have that much time :)

So yeah, every time I see this I just can't help thinking "yeah, right." It's great that NV keeps pushing things!

Matrox.
LMAO! Nice one.
 
It's martrox

not matrox or martox.......<sigh>

well better to be misspelled than ignored..... :rolleyes:

(all in fun, guys)
 
Standard movie resolution is 2048*1536 pixels, anamorphic, with an aspect ratio of 2.35:1. That's roughly 1.6 times the pixels compared to 1600*1200.
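Quick sanity check on that ratio - nothing fancy, just the raw pixel counts:

Code:
film   = 2048 * 1536   # pixels per frame at the quoted film resolution
pc_res = 1600 * 1200   # a high-end monitor resolution of the day
print(film / pc_res)   # ~1.64, i.e. roughly 1.6x the pixels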

DemoCoder's right though; the main differences compared to realtime graphics are subpixel tessellation of HOS geometry (NURBS, B-patches, subdivs) and extremely high-res textures, and I'd add stochastic motion blur and DOF as well. Oh, and don't forget about antialiasing - there's close to zero tolerance when rendering for film, so you must crank the sampling rate up really high.
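For anyone wondering what "stochastic" means here, a minimal sketch of how an offline renderer jitters its samples over the pixel, the shutter interval and the lens to get antialiasing, motion blur and DOF in one go. The shade() callback is a made-up stand-in for whatever actually evaluates the scene; real renderers distribute samples far more cleverly than plain random numbers:

Code:
import random

def render_pixel(x, y, shade, samples=64, shutter=1.0, lens_radius=0.1):
    # Average many jittered samples for one pixel.
    # shade(px, py, t, lu, lv) is a hypothetical scene-evaluation callback.
    total = 0.0
    for _ in range(samples):
        px = x + random.random()                      # jitter within the pixel (AA)
        py = y + random.random()
        t  = random.random() * shutter                # random time -> motion blur
        lu = (random.random() - 0.5) * lens_radius    # random lens position -> DOF
        lv = (random.random() - 0.5) * lens_radius
        total += shade(px, py, t, lu, lv)
    return total / samples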
 
There's no way it can do the whole film in realtime, but at a lower resolution, with reduced FSAA, texture filtering, geometry, and no motion blur, it may be able to do pretty well for most of the less complex scenes (no explosions/complex transparent effects).

After all, each frame only averaged 18.42 layers. It's not unbelievable that the NV30 could render that many at a reasonable resolution in realtime (Around 1024x768 with mild FSAA, though maybe a bit lower). That is, of course, unless the shading effects are particularly complex.
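Back-of-the-envelope on that layer count - the fill rate below is just an assumed ballpark for an NV30-class part, and this ignores the per-fragment shading cost entirely:

Code:
pixels_per_frame = 1024 * 768
layers           = 18.42          # average layers per frame, from the post above
fps              = 30

fragments_per_sec = pixels_per_frame * layers * fps
print(fragments_per_sec / 1e6)    # ~434 million fragments/sec

assumed_fillrate = 2e9            # assumed ~2 Gpixels/s for an NV30-class chip
print(assumed_fillrate / fragments_per_sec)  # ~4.6x headroom, before shading cost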

Regardless, the GeForce3's rendering of Final Fantasy had the geometry turned way down, for a very short sequence, with very limited shading effects. If the NV30 can do close to full geometry (enough for pixel-subpixel geometry at the chosen resolution...which does mean less than the movie used), and full shading effects, for a decent sequence, then I'll be happy. This may be possible...but it would be hard to get it to run well on the NV30.

Update:
And, of course, if the NV30 can indeed do Final Fantasy-quality rendering, then it may be important for nVidia for more than just PR. If just one of these chips can actually get somewhere close to rendering a real Final Fantasy movie frame in realtime, just imagine if somebody (Quantum3D?) produced large arrays of these chips specially designed for offline rendering. The performance increase for these farms could be phenomenal.

I understand what you are saying, and the similar comments by others here as well, and I agree for the most part.

It's just that there is a world of difference between being able to do the polygon counts and shading of the simpler scenes - with greatly reduced resolution, texture quality/res, FSAA, etc. - and saying that "the NV30 can do FF:TSW in realtime," because everyone agrees that is not possible, or even remotely close to possible, in the near term.

At least not on a single chip (more on that in a moment). For me, it's pretty simple. Can we expect games (which need 30-60 fps motion to be playable and smooth) with FF-movie graphics on NV30, or even any refresh of NV30? No way. Let's forget FF the movie for now. Think about less advanced and older CGI - like Toy Story, or not even that. Think of television-quality CGI from the early-to-mid 1990s. Are games going to even look that good on NV3X cards? I don't know the answer to that. I'd say probably not. Maybe not until NV40~NV50. But I think FF should be forgotten about for now, in terms of gaming or even slower real-time rendering (10 fps or less).

Sure, NV30 is a huge step up in rendering precision and quality, as well as pixel- and polygon-pushing performance. But it isn't even close to rendering everything from CGI movies in realtime, even at reduced resolution, texture size, and FSAA. It still isn't going to be there.

Now, about what you mentioned in your last paragraph, about large arrays of GPUs like NV30 and R300: perhaps they could be extremely useful for off-line rendering, but imagine what they could do in realtime rendering. SGI did this with 16 or more pipelines in their highest-end series of real-time visualization systems, reaching rates of several hundred million vertices/sec in the late 90s. Imagine an array of modern GPUs used for realtime - 256 R300s can supposedly be used together. Think of what 256 R300s or NV30s could do. Several large boxes stuffed with NV30s or R300s might be able to do Toy Story in realtime; maybe 256 could even do FF:TSW at 60fps (maybe if it's 256 NV35s or R350s, to be on the safe side :)).
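Toy scaling math, with a completely assumed per-chip vertex rate and ideal (i.e. unrealistic) linear scaling:

Code:
per_chip_vertices = 300e6   # assumed ~300M vertices/sec per chip (ballpark, not a spec)
chips             = 256

ideal_total = per_chip_vertices * chips
print(ideal_total / 1e9)    # 76.8 billion vertices/sec if scaling were perfectly linear

In practice nothing scales linearly like that - sorting, compositing and inter-chip bandwidth eat a big chunk - but it gives a feel for why people keep bringing up arrays.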

Think of what could be put into a high-end box no larger than a PC - maybe 32 R300s or NV30s. If PCs could be equipped with large numbers of GPUs (not as many as 256, though) and sold for under $3000, we could have a revolution in realtime 3D at near-consumer prices. I have no idea if this is going to happen or not - I'm not in the industry - but I follow it as best I can because it's interesting to me.

Perhaps Nvidia and ATI are going to move to a Cell-like approach in the next several years. Imagine they design the best core that they possibly can, with really high IQ and texture/shader ability - an extremely high-quality and efficient core. Then Nvidia and/or ATI take that core and do what IBM+Sony are doing with Cell: they put as many of those GPU cores together on one die as they possibly can, easily shattering the 1-billion-transistor mark in the process. It wouldn't even have to be clocked very high - say, 500 MHz, which won't be that high by 2004-2005. Perhaps that is the only near-term way to get anything like low-end CGI graphics done on a single chip that can be used in PC cards or consoles for realtime consumer applications.
 
You will call me Matrix... not Maxtor... dammit :p

I can render FF in RT without any problems at all using the most advanced technology known to man.. the human mind ;)
 
The Doom 3 engine is the most technically advanced realtime 3D renderer right now - I believe we can agree on that. Its features are at most 3ds max R1 quality - that is, 1996:
- A small number of per-pixel lights with hard shadows
- Simple materials with color, bump and specular only (this might change for the final version though; see the sketch after this list)
- Low poly counts

- No raytracing
- No area lights + shadows
- No hair (and that is real RiCurve primitives /cylinders/ and not opacity mapped polygons...)
- No post-processing for glows, glooms or shadows
- No motion blur
- No DOF
- etc.
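Here's roughly what that "color + bump + specular" material boils down to per pixel - a minimal Blinn-Phong-style sketch, not Doom 3's actual shader code; the inputs and the little vector helpers are made up for illustration:

Code:
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade_pixel(albedo, spec_map, normal, light_dir, view_dir, shininess=32.0):
    # normal comes from the bump/normal map; vectors assumed already normalized
    n_dot_l = max(dot(normal, light_dir), 0.0)
    half    = normalize(add(light_dir, view_dir))      # Blinn half-vector
    n_dot_h = max(dot(normal, half), 0.0)
    diffuse  = scale(albedo, n_dot_l)                  # color map * diffuse lighting
    specular = scale(spec_map, n_dot_h ** shininess)   # gloss map * highlight
    return add(diffuse, specular)                      # no GI, area lights, hair, etc.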

The old FF "realtime" demo seemed like it used shadow buffers and vertex lighting with specular maps only. That's faaaaar from prerendered quality.

Keep in mind that Square has usually rendered several layers for their characters (color, specular, shading for each light, backlighting, side speculars, reflections, hair, shadow) and combined them in 2D compositing software to manually fine-tune the look. These might be collapsed by sacrificing control and quality, but it's still too much for current hardware to do in realtime. Maybe an NV30 can render it faster than your average SGI Octane, but it's still not the same ;)
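To give a feel for what "combining layers in 2D compositing" means, here's a toy collapse of a few passes into one value - the pass names, weights and blend order are invented for illustration, not Square's actual pipeline:

Code:
def composite(color, light, specular, reflection, shadow, backlight,
              refl_amount=0.3, backlight_amount=0.2):
    # Collapse separately rendered passes (plain floats here for brevity)
    # into one value, roughly the way a compositor might stack them.
    beauty  = color * light * shadow         # base shading, attenuated by shadow pass
    beauty += specular * shadow              # specular pass, also shadowed
    beauty += reflection * refl_amount       # dialed-in reflection pass
    beauty += backlight * backlight_amount   # rim/backlight pass, artist-tweakable
    return beauty

# Example: tweak refl_amount per shot without re-rendering anything in 3D.
print(composite(0.8, 1.0, 0.4, 0.6, 0.9, 0.5))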
 
The primary difference will be the dataset size, and as you mentioned, film quality AA (both temporal and full scene), which IIRC is 64 samples minimum.

However, the lack of ray tracing is a non-issue, since PRMan is not a raytracer.


I'd say that the NV30 with a custom rendering package (Gritz's ExLuna?) could easily best a typical Octane or Pentium4/AMD/Alpha rendering box (or an ART RenderDrive). The NV30 blows away these CPUs in terms of bandwidth and vector processing power.

So, the NV30 could not render a full-quality full-frame Final Fantasy The Movie in "real time", but used as an offline rendering accelerator, it would still outperform a Linux or SGI box running PRMan in pure software.

(PRMan is dog slow anyway, which is one of the reasons why Pixar sued Gritz and his co-workers into oblivion, taking ExLuna and BMRT off the market)

Too bad BMRT was never released under an open-source license.
 
DemoCoder said:
The primary difference will be the dataset size, and as you mentioned, film quality AA (both temporal and full scene), which IIRC is 64 samples minimum.

Yes, yes. Antialiasing depends... it's usually adaptive, from 1 sample up to whatever you need (4, 16, 64...). This is one of the more important aspects of an offline renderer, BTW.
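A crude sketch of what "adaptive" means here - start with a few samples and only buy more where the pixel still looks noisy. shade() is again a hypothetical stand-in, and real renderers use far better refinement criteria:

Code:
import random

def sample_pixel(x, y, shade, levels=(4, 16, 64), threshold=0.05):
    # Take more samples only while the pixel still looks noisy.
    # shade(px, py) is a hypothetical scene-evaluation callback.
    # (Real renderers can also compare against neighbouring pixels,
    # which is how flat regions get away with a single sample.)
    samples = []
    mean = 0.0
    for count in levels:
        while len(samples) < count:
            samples.append(shade(x + random.random(), y + random.random()))
        mean = sum(samples) / len(samples)
        if max(samples) - min(samples) < threshold:  # crude contrast test
            break                                    # smooth enough, stop refining
    return mean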

However, the lack of ray tracing is a non-issue, since PRMan is not a raytracer.

That depends on what you want to do. You can already fake reflections and refractions with cube maps and normal-based distortion, but there might be cases where you need to trace. Gritz was originally hired by Pixar to connect BMRT to PRMan for the cases where they needed raytracing (for example, in A Bug's Life). Even ILM has just signed a contract with Mental Images to integrate their raytracer into their pipeline... as CPUs get faster, there are more and more practical uses for raytracing in the industry.
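Roughly how the cube-map fake works, for the curious: reflect the view vector about a (bump-perturbed) normal and use the result to pick a texel from the environment cube. All the helper names below are invented, and a real implementation lives in a pixel shader, not Python - face/uv conventions also differ per API:

Code:
def reflect(incident, normal):
    # R = I - 2*(N.I)*N, standard mirror reflection about the normal
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def perturb(normal, bump, strength=0.2):
    # crude "normal-based distortion": nudge the normal by a bump-map offset
    n = tuple(c + strength * b for c, b in zip(normal, bump))
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

def cube_lookup(direction):
    # pick the cube face by the dominant axis, then project to (u, v) on that face
    axis = max(range(3), key=lambda i: abs(direction[i]))
    major = direction[axis]
    u, v = [direction[i] for i in range(3) if i != axis]
    return axis, major > 0, 0.5 * (u / abs(major) + 1.0), 0.5 * (v / abs(major) + 1.0)

# fake a reflection: view ray hits a bumpy surface, look up the environment
r = reflect((0.0, 0.0, -1.0), perturb((0.0, 0.0, 1.0), (0.3, -0.1, 0.0)))
print(cube_lookup(r))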


(PRMan is dog slow anyway, which is one of the reasons why Pixar sued Gritz and his co-workers into oblivion, taking ExLuna and BMRT off the market)

Actually, PRMan is pretty fast - it's the fastest when you do displacement and motion blur combined with insane geometry and texture detail. It's also about as "fake" as an offline renderer gets - you only get a handful of features (that's what they're changing with the new version, though), but what you get in exchange for those sacrifices is speed and stability.

Exluna was more likely sued because they offered a competitive renderer for a fraction of PRMan's $5000/CPU price... but it was usually slower, as far as I know. A pal of mine was working with it just when the decision came to kill Entropy, and he said they weren't really blown away by the speed; they even resorted to the standard 3ds max renderer for some shots.


Anyway, if the NV30 and co. can hardware-accelerate rendering, then it'll most likely conquer smaller studios first, because of the slow momentum of the big CGI houses - I guess that's why ATI is starting with Maya/Max renderer translators instead of RenderMan first. Pretty exciting possibilities though; can't wait to get our hands on such a card ;)
 
At least not on a single chip (more on that in a moment). For me, it's pretty simple.

I think the Sony GS Cube was the first to render a scene from Final Fantasy in real time. It was the scene where Aki floats in that zero-gravity room. It rendered in high res too. I am pretty sure it's downgraded from the movie quite a bit, but it looks pretty good - much better than the NVIDIA one.

Though that GS Cube had monster combined bandwidth and fillrate, I wouldn't be surprised if they actually just multipassed the scene many times.
 
Well, the reasons behind the suit against ExLuna are complex. I talked to a Pixar guy at one point who told me that ExLuna drastically sped up the file I/O and memory management in their renderer, changes which may have made it back into later PRMan versions. Notice how the newest versions of PRMan seem to have all the features of ExLuna and BMRT (except for radiosity I think).


Pixar is in a pickle with PRMan. They hardly make any revenue from it because of the way it is priced (insanely, and with no volume discounts), and could easily make tons more if they simply dropped the price and sold on volume. On the other hand, Pixar sees PRMan as their "ace in the hole" against other studios, since they rake in far more revenue from their films, and I believe they see some value in preventing every Tom, Dick, and Harry studio from being able to run large PRMan render farms on the cheap.

ExLuna was a huge threat to that: a third-party RenderMan alternative from a reputable source that was priced cheaper, had better features, and (from what I heard) was faster. So Pixar pulled out the big lawyer guns and not only had ExLuna pulled from the market, but BMRT as well - which was written before Gritz ever worked for Pixar. That's what really pissed me off about Pixar: that they pulled a free tool, mainly used by college students across the country, from the market.


And they had all this done on some really flimsy evidence against Gritz. They searched through all the BMRT code until they found anything even resembling any code pattern in the PRMan source, and then used that as a basis for charges against Gritz and his co-workers. They also trumpeted out some bogus software patents.

I mean, people talk about Microsoft being anti-competitive, but MS competitors usually die by failing in the market. Here a promising company was destroyed by lawyering.
 
Something that I didn't see on the Korean site:

[attached images: gfmaniaday39.jpg, gfmaniaday40.jpg]


That looks pretty good quality to me.

US :eek:
 