Final Fantasy & NV30

alexsok

nvnews reports the following:

NVIDIA Mania Day - 11/04/02 7:51 am - By: pelly - Source:
I just received an email from one of our readers who attended the NVIDIA Mania Day held in Korea. Apparently, David Kirk spent the majority of the time covering GeForce4 products/technology. However, he did manage to give the audience one small bit of information regarding NV30.

...but he mentioned that NV30 will have enough power to render 'Final Fantasy' in real time

Now I know this has been thoroughly discussed here already bit by bit, but I felt it was worth bringing it up again, since it's coming directly from NVIDIA's top guy.

Do you really think NV30 will be capable of rendering Final Fantasy in real time? Yes, I mean with all the special effects and such, nothing downgraded as appeared to be the case on the Quadro.

I think it's way too unrealistic at this point...
 
David Kirk is the PR Man...

It's stripped down somehow. We are still a few years off from that, but much closer than before.
 
I wish they would stop claiming that; it's getting old... :rolleyes:

jandar> AFAIK Kirk is Nvidia's chief scientist, not a PR guy
 
Sigh, not this again. Here are the stats on FF:


http://arstechnica.com/wankerdesk/01q3/ff-interview/ff-interview-1.html


Here are the stats in case you don't have time to read it:

Number of Sequences = 36
Number of Shots = 1,336
Number of Layers = 24,606
Number of Final Renders (assuming that everything was rendered once) = 2,989,318
Number of Frames in the Movie = 149,246
Average number of shots per sequence = 37.11
Average number of rendered layers per shot = 18.42
Average number of frames per shot = 111.71
Estimated average number of render revisions = 5
Estimated average render time per frame = 90 min
Shot with the most layers = (498 layers)
Shot with the most frames = (1899 frames)
Shot with the most renders [layers * frames] = (60160 renders)
Sequence with the most shots = (162 shots)
Sequence with the most layers = AIR (4353 layers)
Sequence with the most frames = (13576 frames)
Using the raw data (not the averages) it all adds up to 934,162 days of render time on one processor. Keep in mind that we had a render farm with approximately 1,200 procs.

Troy: ...and that's just final renders. Including test renders, revisions, and reviews, it's much more. SQB (the render farm software) ran ~ 300,000 jobs, w/ an average of 50-100 frames/job (depending on the type of job). For storage, we have about 4TB online (and pretty full, most of the time...).
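For scale, those totals can be cross-checked directly from the per-item stats. A minimal sketch in Python (my arithmetic from the figures quoted above; "renders" here are per-layer frame renders, and the 5x revision factor is the estimate from the stats):

```python
# Reconstruct the quoted totals from the per-item stats above.
renders    = 2_989_318   # final renders, assuming everything rendered once
revisions  = 5           # estimated average number of render revisions
render_min = 90          # estimated average render time per frame (minutes)

total_days = renders * revisions * render_min / 60 / 24
print(f"Total render time: {total_days:,.0f} CPU-days")   # ~934,162

farm_procs = 1_200
print(f"Farm wall-clock:   {total_days / farm_procs:,.0f} days")  # ~778
```

That's over two years of wall-clock time on a 1,200-processor farm, before test renders and reviews are counted.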


For the last year David has spoken more like a PR guy and less like the chief scientist that he is. I enjoyed the old David who just stuck to the facts and did not boast about his products...
 
Do you really think NV30 will be capable of rendering Final Fantasy in real time? Yes, I mean with all the special effects and such, nothing downgraded as appeared to be the case on the Quadro.

No way. Not even the NV50 (including any conceivable XBox2) will be able to do FF: The Spirits Within in real time at 24-60 fps with nothing downgraded. A 1:1 translation into realtime with all the effects, resolution, geometry, lighting, and AA is not even close to possible on NV30, no matter how good the NV30 is. Now, a realtime version of FF:TSW could look much better than the GeForce3/4 version, while running at a better framerate.

Perhaps NV30 would be able to approach 1/10th of the film's quality technically, while at the same time appearing to be maybe 1/5th as good to the human eye, but in no way could NV30 reproduce the film in realtime at 100% quality.
 
Perhaps NV30 would be able to approach 1/10th of the film's quality technically, while at the same time appearing to be maybe 1/5th as good to the human eye, but in no way could NV30 reproduce the film in realtime at 100% quality.

Lol, but if you look at it sideways you can get 1/3rd of the film quality rendered with 1/2 the farm. Or maybe 1/5th the quality takes only 1/60th of the time, and 2/3rds of the film looks 7/9ths as good?

Fun with arbitrary fractions.

Are you ready? Coming soon.
 
Ever since the GeForce3, Nvidia has been boasting about "rendering Final Fantasy in realtime". Nothing new here.

Heh...actually, I'm still waiting for "Toy Story" in real time. Somehow, we never made it to THAT milestone, but jumped right to Final Fantasy....
 
alexsok said:
Do you really think NV30 will be capable of rendering Final Fantasy in real time? Yes, I mean with all the special effects and such, nothing downgraded as appeared to be the case on the Quadro.
We've been over this before. FF: The Spirits Within features lots of effects that can only be achieved through post-processing and many, many layers (up to 498, according to the stats posted by jb). That's, simply put, impossible to handle in RT.

ta,
-Sascha.rb
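
(For anyone unfamiliar with the jargon: each of those hundreds of "layers" is a separately rendered image that gets composited offline, typically with the Porter-Duff "over" operator. A minimal Python/NumPy sketch of that operation - purely illustrative, since the film's actual compositing pipeline isn't public at this level of detail:

```python
import numpy as np

def over(front_rgb, front_a, back_rgb, back_a):
    """Porter-Duff 'over' with premultiplied alpha: front onto back."""
    out_a   = front_a + back_a * (1.0 - front_a)
    out_rgb = front_rgb + back_rgb * (1.0 - front_a)
    return out_rgb, out_a

h, w = 480, 640                        # tiny stand-in frame
rgb = np.zeros((h, w, 3))
a   = np.zeros((h, w, 1))

# Composite back-to-front; a real FF shot would loop up to 498 times.
for alpha in (0.3, 0.5, 0.8):          # three hypothetical layers
    layer_rgb = np.random.rand(h, w, 3) * alpha   # premultiplied colour
    layer_a   = np.full((h, w, 1), alpha)
    rgb, a = over(layer_rgb, layer_a, rgb, a)
```

Doing hundreds of such full-frame passes, plus the filtering and grading between them, is exactly the post work that doesn't map onto a single-pass realtime renderer.)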
 
Heh...actually, I'm still waiting for "Toy Story" in real time. Somehow, we never made it to THAT milestone, but jumped right to Final Fantasy....

heh, exactly.

Now maybe, just maybe, NV90 or XBox3 will be able to do Toy Story 1 in realtime, in about 9-10 years. But anyone who seriously thinks NV30 can even remotely approach FF:TSW, probably the highest-quality CGI film ever, in real time (meaning interactive 30-60fps) is buying into too much of Nvidia's hype machine. Nvidia's PR needs to be reined in. As if NV17/GeForce "4" MX isn't bad enough, the PR department wants us all to believe each new Nvidia GPU can reproduce movie-quality prerendered graphics. :LOL:


Sony will do the same thing with Playstation3. Mark my words.
 
There's no way it can do the whole film in realtime, but at a lower resolution, with reduced FSAA, texture filtering, geometry, and no motion blur, it may be able to do pretty well for most of the less complex scenes (no explosions/complex transparent effects).

After all, each frame only averaged 18.42 layers. It's not unbelievable that the NV30 could render that many at a reasonable resolution in realtime (around 1024x768 with mild FSAA, though maybe a bit lower). That is, of course, unless the shading effects are particularly complex.
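
A quick sanity check on the fill-rate side of that claim (Python; the 2 Gpixel/s figure is my assumption, roughly GeForce4 Ti territory, since NV30's real specs weren't public at the time):

```python
# Raw fill needed for ~18.42 full-screen layers at 1024x768, 60 fps.
w, h   = 1024, 768
layers = 18.42            # average rendered layers (stats above)
fps    = 60

required = w * h * layers * fps
print(f"Required: {required / 1e9:.2f} Gpixels/s")       # ~0.87

assumed_fill = 2e9        # assumed GeForce4 Ti class fill rate
print(f"Budget used: {required / assumed_fill:.0%}")     # ~43%
```

So raw fill alone isn't the obstacle; the per-pixel shading cost is, which is exactly the caveat above.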

Regardless, the GeForce3's rendering of Final Fantasy had the geometry turned way down, for a very short sequence, with very limited shading effects. If the NV30 can do close to full geometry (enough for pixel-subpixel geometry at the chosen resolution...which does mean less than the movie used), and full shading effects, for a decent sequence, then I'll be happy. This may be possible...but it would be hard to get it to run well on the NV30.

Update:
And, of course, if the NV30 can indeed do Final Fantasy-quality rendering, then it may be important for nVidia for more than just PR. If just one of these chips can actually get somewhere close to rendering a real Final Fantasy movie frame in realtime, just imagine if somebody (Quantum3D?) produced large arrays of these chips specially designed for offline rendering. The performance increase for these farms could be phenomenal.
 
[In Dana Carvey's impersonation of George Bush voice]

Not gonna happen, not gonna do it.

[/In Dana Carvey's impersonation of George Bush voice]

:LOL:
 
So

How about the Balrog Sequence or The Last Alliance of Men and Elves from FOTR? Supposedly ATi had a facsimile of that running in real time.

For me, FF:TSW was just a long cartoon (albeit a very impressive one), while FOTR was a true masterpiece.
 
Gollum said:
I wish they would stop claiming that; it's getting old... :rolleyes:

jandar> AFAIK Kirk is Nvidia's chief scientist, not a PR guy
Depending on the nature of the conversation, any staff member can be a PR person. You approach someone with the outright intention of an "interview" and they are already on "PR standby status". Look at all my interviews with Kirk - it's all PR-friendly replies from him. But look at my emails with him when he knows it's not for an interview, and his replies are less PR-friendly.

Usually, though, the higher up the chain of command someone is, the more PR-inclined he is. Talk to a regular driver engineer and it's less so.
 
Yes, once nVidia can render the Powerpuff Girls in realtime, we'll know their video card is powerful enough for DOOM3 and anything else!
 
I find it hard to believe - like all the skeptics. I understand h/w OpenGL calls are going to be a lot faster than general distributed s/w on fast CPUs, which is all a rendering farm really is. But 50,000 times faster? Initially I thought not.

But remember, John Carmack himself said just a few months back that it will be possible sometime in 2003 - most likely late 2003. JC knows what he is on about. While folk said it's 20 years away, they were using s/w running on 100+ fast CPUs vs specialised parallel h/w in a GPU. I know the arguments; the joy is that in 2 weeks we may get our first glimpse - and within a year the winners and losers in this bet should be clear.
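
For what it's worth, here's roughly where multipliers like that land against the stats quoted earlier (my arithmetic, not Carmack's):

```python
# Gap between offline render times and a 24 fps realtime budget.
frame_budget = 1 / 24                     # seconds per frame, realtime

naive   = (90 * 60) / frame_budget        # "90 min per frame" at face value
layered = 20 * (90 * 60) / frame_budget   # ~20 layer renders per final frame

print(f"Naive reading:     {naive:>12,.0f}x")    #    129,600x
print(f"Per-layer reading: {layered:>12,.0f}x")  #  2,592,000x
```

By that reckoning, 50,000x is, if anything, on the low side once layers are counted - so the skepticism has numbers behind it.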
 