Final Fantasy & NV30

no_way said:
they should try Tron first perhaps :p

LOL :LOL:

Actually, that makes very good sense. That bloke in the car going around 90-degree corners at 100mph would definitely feel the GeForces! :p
 
That picture looks identical to the FF demo at Siggraph 2001. I have a screenshot of the same figure; give me a minute.
 
And the thing is, who cares? Even if it were able to render FF, you wouldn't be able to play a game like that; it would run far too slow. Furthermore, no game will ever be as intricate as a movie. Games just aren't linear enough: you have to cover too broad a range of situations to make them as spot-on accurate as a movie, which only needs to render specific, pre-planned shots.

Whether or not the NV60 or whatever can render FF in 5 years, games won't look like that for another 5-15, if ever.
 
It matters to people building render farms. If you could spend $50k on GPUs and match a $1 million renderfarm (small by industry standards), you'd be very interested. GPUs can be used to accelerate offline rendering as well.

Higher flops means higher server rack density, less power consumption, and lower overall cost for each rendering node.
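
Just to put rough numbers on that argument, here's a minimal back-of-the-envelope sketch; every per-node cost, GFLOPS and wattage figure below is an assumption for illustration, not a real quote:

```python
# Back-of-the-envelope sketch of the renderfarm argument above.
# All numbers are hypothetical assumptions, not vendor figures.

CPU_NODE_COST = 5_000     # assumed cost of one CPU rendering node ($)
CPU_NODE_GFLOPS = 5       # assumed usable GFLOPS per CPU node
CPU_NODE_WATTS = 300      # assumed power draw per CPU node

GPU_NODE_COST = 2_500     # assumed cost of one GPU-equipped node ($)
GPU_NODE_GFLOPS = 50      # assumed usable GFLOPS per GPU node
GPU_NODE_WATTS = 350      # assumed power draw per GPU node

def nodes_for(target_gflops, gflops_per_node):
    """How many nodes are needed to reach a target aggregate throughput."""
    return -(-target_gflops // gflops_per_node)  # ceiling division

TARGET = 1_000  # aggregate GFLOPS the farm should deliver (assumed)

for name, cost, gflops, watts in [
    ("CPU farm", CPU_NODE_COST, CPU_NODE_GFLOPS, CPU_NODE_WATTS),
    ("GPU farm", GPU_NODE_COST, GPU_NODE_GFLOPS, GPU_NODE_WATTS),
]:
    n = nodes_for(TARGET, gflops)
    print(f"{name}: {n} nodes, ${n * cost:,}, {n * watts / 1000:.1f} kW")
```

With those assumed numbers the CPU farm lands around $1 million and the GPU farm around $50k for the same throughput, which is the whole point: fewer, denser, cheaper nodes.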
 
The problem I see is that ATI says, *look, the 9700 can do 325 million PPS with our quad vertex engine*, and you can see that it makes believable sense, or that it has quantifiable stats for its pixel pipelines, etc.

Nvidia is releasing some total nonsense about 51 GFLOPS and 100 Jurassic Park dinos at 100 fps, while in the same slide show claiming it's only 2x faster than a GF4... :rolleyes: (which we know is only going to be true with FSAA).

Give me a freaking break. It's never going to approach 51 GFLOPS; it's just as ludicrous as the 16 GFLOPS of the Riva whatever it was. And it's never going to be able to render 100 truly Jurassic Park-quality dinos at 100 fps. This is what pisses me off. It's complete and utter nonsense. They are twisting the **** out of everything they can just to be even remotely legit in posting stuff like that.
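
For the record, those peak-GFLOPS figures are usually just clock rate times unit count times ops per unit per clock. A minimal sketch of that arithmetic, with purely hypothetical numbers (not actual NV30 specs):

```python
# How a theoretical peak-GFLOPS figure is typically constructed.
# Every number here is a hypothetical placeholder, not an NV30 spec.

clock_mhz = 400            # assumed core clock
parallel_units = 8         # assumed number of parallel shader/pixel units
flops_per_unit_clock = 16  # assumed floating-point ops per unit per clock

peak_gflops = clock_mhz * 1e6 * parallel_units * flops_per_unit_clock / 1e9
print(f"theoretical peak: {peak_gflops:.1f} GFLOPS")
# Real workloads sustain only a fraction of this theoretical peak,
# which is exactly the complaint being made above.
```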
 
ben6 said:
That picture looks identical to the FF demo at Siggraph 2001. I have a screenshot of the same figure; give me a minute.

It seems pretty obvious that they didn't show any NV30 at that show, and they almost certainly won't show one until it's officially released in a couple of weeks.
 
Hellbinder[CE] said:
The problem I see is that ATI says, *look, the 9700 can do 325 million PPS with our quad vertex engine*, and you can see that it makes believable sense, or that it has quantifiable stats for its pixel pipelines, etc.
Really? When? Before the end of the NDA? Surely not, so what's your point? :-?
 
DemoCoder said:
It matters to people building render farms. If you could spend $50k on GPUs and match a $1 million renderfarm (small by industry standards), you'd be very interested. GPUs can be used to accelerate offline rendering as well.

Higher flops means higher server rack density, less power consumption, and lower overall cost for each rendering node.

Why does this require the chip to render anything in Real Time (tm) though?

I don't think gaming cards and film rendering will ever converge, to be honest. Maybe ATI and Nvidia will build farms outfitted with the same GPUs as their gaming cards, but they won't render in real time, individually or together.

The main thing is: film rendering doesn't require real-time rendering, and it's not practical to make games as intricate as films (and most special effects can be simulated "well enough" using different, less intensive methods).
 
Nagorak said:
I don't think gaming cards and film rendering will ever converge, to be honest. Maybe ATI and Nvidia will build farms outfitted with the same GPUs as their gaming cards, but they won't render in real time, individually or together.

I disagree. I think eventually, pretty soon I bet, you will be able to buy a gaming card that is also found in a professional workstation.
Rendering that was once done on a CPU, or a whole bunch of CPUs, will be done on a very cheap (relatively speaking) graphics chip or chips at a faster pace.

It doesn't mean it will be in real-time, but it will definitely be quicker than using a cpu.

They will converge.
 
I think the Sony GS Cube was the first to render a scene from Final Fantasy in real time. It was the scene where Aki floats in that zero-gravity room. It rendered in high res too. I am pretty sure it's downgraded from the movie quite a bit, but it looks pretty good, much better than the NVIDIA one.

Yes, you are right. Real-time FF movie scenes were first rendered on the GS Cube. I think it was the 16 EE+GS version first, and it was at interactive framerates, maybe 60fps or at least 24fps, MUCH MUCH higher than Nvidia's 2.5fps on the GF3/NV20. It did look really good. It had a much higher polygon count too, although still far simpler than the movie in terms of effects and geometry.

Though that GS Cube had monster combined bandwidth and fillrate, I wouldn't be surprised if they actually just multipassed the scene many times.
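
Multipassing at high res gets fill-hungry fast, though. A quick sketch of the arithmetic, with the resolution, frame rate, pass count and overdraw all assumed purely for illustration:

```python
# Rough fillrate cost of multipass rendering, to illustrate why the
# GS Cube's aggregate fillrate matters.  All figures are assumed.

width, height = 1920, 1080   # assumed "high res" output
fps = 60                     # assumed target frame rate
passes = 8                   # assumed number of rendering passes per frame
overdraw = 3                 # assumed average overdraw per pass

pixels_per_second = width * height * fps * passes * overdraw
print(f"required fill: {pixels_per_second / 1e9:.1f} Gpixels/s")
```

Even with those modest assumptions you're into multiple gigapixels per second, which single chips of that era couldn't sustain but an aggregate of 16 GS chips could.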


yeah, talk about MASSIVE, MASSIVE bandwidth, vertex and pixel rates.
And 32MB of eDRAM on each GS (32MB * 16 GS = 512MB of embedded memory!) plus several GB of main memory. And that was just the 16-chip version; the 64 EE+GS version increases everything by 4x, and even that's still not nearly enough to do the FF movie in realtime.

PS3 will probably be more powerful than the 64 EE+GS GSCube, but have less eDRAM.
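
For reference, the embedded-memory totals follow from simple multiplication (the 32MB-per-GS figure is from the post above; the rest is just arithmetic):

```python
# Aggregate embedded memory across the two GS Cube configurations,
# following the arithmetic in the post above.

edram_per_gs_mb = 32  # eDRAM on each GS chip (from the post)

for gs_chips in (16, 64):
    total_mb = edram_per_gs_mb * gs_chips
    print(f"{gs_chips} EE+GS: {total_mb} MB of eDRAM total")
# 16 chips -> 512 MB, 64 chips -> 2048 MB of embedded memory.
```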

 
megadrive0088 said:
Yes, you are right. Real-time FF movie scenes were first rendered on the GS Cube. I think it was the 16 EE+GS version first, and it was at interactive framerates, maybe 60fps or at least 24fps, MUCH MUCH higher than Nvidia's 2.5fps on the GF3/NV20. It did look really good. It had a much higher polygon count too, although still far simpler than the movie in terms of effects and geometry.

According to reports on the show floor, it was closer to 15 fps, not 2.5 fps. There's a big difference.
 
This FF stuff is usually a big can of worms. The purpose of a graphics card is not to equal a huge render farm in raw muscle, but to outsmart it by cutting corners in an elegant way. Sure, you won't do what the authors of the movie did, but you can achieve pretty close results by making use of things like DM, PS, VS, shadow buffers, etc. I see people posting numbers related to the render farm, but failing to see the truth: VPUs are not only about brawn, but also brain ;)
 