NVIDIA GF100 & Friends speculation

True story: at one point Pixar was getting odd errors, tracked them down to floating-point precision, and switched to double. Never looked back.
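For a concrete (and purely toy) illustration of the kind of error that disappears when you go from single to double precision, here's a hedged Python sketch of naive accumulation; the numbers and the sum itself are made up for illustration and have nothing to do with Pixar's actual bug:

```python
import numpy as np

# Accumulate 0.1 one million times. The exact answer is 100,000.
x32 = np.float32(0.1)
x64 = np.float64(0.1)

acc32 = np.float32(0.0)
acc64 = np.float64(0.0)
for _ in range(1_000_000):
    acc32 += x32   # 32-bit: rounding error builds up with every addition
    acc64 += x64   # 64-bit: error stays far below anything you'd notice

print(acc32)   # noticeably off from 100000
print(acc64)   # correct to many significant digits
```

Real renderer bugs are obviously far subtler than this, but the mechanism (tiny per-operation rounding errors compounding over millions of operations) is the same.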

Speaking of Pixar... where are those "real-time Pixar-like graphics" we were supposed to be seeing by now? Sure, Crysis is stunning and gorgeous (especially the environments), but it's still a long way off from the promised holy grail, if you ask me.
 
Even though it's not listed there, there's an ATI Stream plugin for Adobe Premiere Pro, too.
Though I think it's still in beta at the moment.

Well, the Mercury Playback Engine might be CUDA only:

Question: "Are there any downsides to this technology?"

Answer: I guess it depends on your point of view but my 'net-net' answer is no, there is no downside. A more pragmatic and perhaps nuanced answer is 'depends'. If you're an ATI fan, you know that CUDA is NVIDIA products only and consequently, you'll either do without this feature or you'll switch to NVIDIA. Adobe has a valuable relationship with ATI/AMD and we're looking at things like OpenCL, which is a cross-platform GPU approach to what CUDA is.

http://blogs.adobe.com/genesisproject/2009/11/technology_sneek_peek_adobe_me.html
 

I saw that, but if you read through all the Q&A further down in the link ( http://blogs.adobe.com/genesisproject/2009/11/technology_sneek_peek_adobe_me.html ) you get the impression that CUDA will be the only choice, at least at the start. It's not clear whether AMD will be supported before the OpenCL version is done.

Anyway, I just wanted to point out that Fermi is already being eyed by companies like Adobe.
 
Speaking of Pixar... where are those "real-time Pixar-like graphics" we were supposed to be seeing by now? Sure, Crysis is stunning and gorgeous (especially the environments), but it's still a long way off from the promised holy grail, if you ask me.

Maybe Metro 2033 will get us there faster, both on the performance and the visual side. :devilish::devilish:
 
There are no easy answers, in other words. Obviously game physics, as it is today, works and looks good most of the time, but it almost always finds a way to break down somewhere in games. Switching to double precision would get rid of that almost entirely.

The physics breakdowns are almost always related to the limited CPU budget (i.e. CPU speed) and the need to dumb things down, not to precision. Even with DP you would still see objects getting stuck inside each other and moving strangely, or rigid bodies slowly sliding around and looking very odd. Physics demos usually use basic shapes like cubes, cylinders and spheres, or dumb complicated objects down to these primitives. To get more accurate behaviour (one that works on arbitrary geometry) you would need to create a dense mesh of sample points on each object, calculate the force at every point, and then apply the resulting net force to the object. Now imagine hundreds of these objects. None of this has anything to do with SP vs DP.
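As a minimal sketch of that per-point approach (Python, purely illustrative; the function name, the sample points and the force values are all made up, not taken from any engine): sample points on the body, evaluate the force at each one, then reduce everything to a single net force and a net torque about the centre of mass before applying it to the rigid body.

```python
import numpy as np

def accumulate_rigid_body_forces(points, forces, center_of_mass):
    """Reduce per-sample-point forces on a rigid body to one net force
    and one net torque about the centre of mass.

    points : (N, 3) positions of the sample points on the object
    forces : (N, 3) forces evaluated at those points
    """
    net_force = forces.sum(axis=0)                          # total linear force
    lever_arms = points - center_of_mass                    # r_i = p_i - c
    net_torque = np.cross(lever_arms, forces).sum(axis=0)   # sum of r_i x F_i
    return net_force, net_torque

# Toy example: four sample points on one face of a box, gravity shared
# between them plus a single contact push on one corner.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 1.0, 0.0]])
forces = np.tile([0.0, 0.0, -2.45], (4, 1))   # gravity split over the points
forces[0] += [5.0, 0.0, 0.0]                  # contact force on one corner

net_f, net_t = accumulate_rigid_body_forces(points, forces, points.mean(axis=0))
print("net force:", net_f)
print("net torque:", net_t)
```

The cost argument above follows directly: with hundreds of bodies, each carrying a dense set of sample points, the collision queries and this reduction dominate the frame budget regardless of whether the arithmetic is done in single or double precision.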
 
Maybe you need some "semantic" way of avoiding problems: don't run a better simulation, but have rules regarding materials and abstract objects. No, a corpse shouldn't be sliding at 0.01 m/s down a concrete alley; no, a golf ball shouldn't accumulate 20 megajoules of kinetic energy because it got stuck in a kindergarten playground's spinning thing; no, a revolving door shouldn't crush me to death if I push it myself :p
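A rough sketch of what such "semantic" rules could look like (Python; the thresholds, material names and dict layout are all hypothetical, not taken from any real engine): put near-motionless bodies to sleep instead of letting them creep, and clamp kinetic energy per material so a stuck object can never wind up with an absurd amount of it.

```python
import numpy as np

# Hypothetical per-material "sanity" limits, layered on top of the real simulation.
MAX_KINETIC_ENERGY = {"golf_ball": 50.0, "ragdoll": 2000.0}   # joules, made up
SLEEP_SPEED = 0.05                                             # m/s, made up

def apply_sanity_rules(body):
    """body is a dict with 'material', 'mass' (kg) and 'velocity' (m/s)."""
    v = np.asarray(body["velocity"], dtype=float)
    speed = np.linalg.norm(v)

    # Rule 1: bodies creeping slower than the threshold go to sleep (no sliding corpses).
    if speed < SLEEP_SPEED:
        body["velocity"] = np.zeros(3)
        body["asleep"] = True
        return body

    # Rule 2: clamp kinetic energy so a stuck object can't wind up with absurd speed.
    limit = MAX_KINETIC_ENERGY.get(body["material"], float("inf"))
    energy = 0.5 * body["mass"] * speed ** 2
    if energy > limit:
        allowed_speed = (2.0 * limit / body["mass"]) ** 0.5
        body["velocity"] = v * (allowed_speed / speed)
    return body

ball = {"material": "golf_ball", "mass": 0.046, "velocity": [0.0, 0.0, 900.0]}
print(apply_sanity_rules(ball))   # velocity scaled back to a sane magnitude
```

The point is that these are cheap bookkeeping checks on top of the existing simulation, not a better or more precise simulation.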
 
Maybe you need some "semantic" way of avoiding problems: don't run a better simulation, but have rules regarding materials and abstract objects. No, a corpse shouldn't be sliding at 0.01 m/s down a concrete alley; no, a golf ball shouldn't accumulate 20 megajoules of kinetic energy because it got stuck in a kindergarten playground's spinning thing; no, a revolving door shouldn't crush me to death if I push it myself :p

Would be a fun reality if true :D
 
And you are making this very assertive claim based on... what exactly?

Based on the same thing others used to assert that the GTX 480 has only 480 ALUs enabled, a figure which has already changed 10+ times, along with the TDP...

Seriously now, aren't there enough hints that the GTX 480 does indeed have all of its SMs enabled?
 
Maybe Metro 2033 will get us there faster, both on the performance and the visual side. :devilish::devilish:

Doubt it. I'm actually in agreement with what the Crytek CEO said a while back: only around 2012 will a new revolution in graphics appear. Until then, all we'll see is graphics engines catching up with what CryEngine 2 already offered in Crysis/Crysis Warhead.
Metro 2033 looks good, but I'm not as impressed as I was with Crysis, although obviously I still need to play Metro :)
 
Doubt it. I'm actually in agreement with what the Crytek CEO said a while back: only around 2012 will a new revolution in graphics appear. Until then, all we'll see is graphics engines catching up with what CryEngine 2 already offered in Crysis/Crysis Warhead.
Metro 2033 looks good, but I'm not as impressed as I was with Crysis, although obviously I still need to play Metro :)
I was merely joking; you missed the sarcasm in my comment, which is why I added the devil smiley!

I played the game (DX10); it has nothing special, actually, apart from some impressive lighting and dynamic shadows. Texture resolution is nothing special; Crysis is far better in that regard.

However, the game does use a decent polygon count for the characters; they look very detailed.

The overall impression: this is just another good-looking game, and its graphics don't justify the massive performance hit, or the inability of modern GPUs to run it at max details.
 
I was merely joking; you missed the sarcasm in my comment, which is why I added the devil smiley!

Yeah, I've been missing a few of those lately :)

DavidGraham said:
I played the game (DX10); it has nothing special, actually, apart from some impressive lighting and dynamic shadows. Texture resolution is nothing special; Crysis is far better in that regard.

However, the game does use a decent polygon count for the characters; they look very detailed.

The overall impression: this is just another good-looking game, and its graphics don't justify the massive performance hit, or the inability of modern GPUs to run it at max details.

That was my impression after seeing the trailers and screenshots, although I do need to play the game, since I don't want to judge something based on its cover alone. It definitely looks good, just not as good as Crysis, and from the little we've seen, performance is even worse than in Crysis.
 
Latest Update: @VR-Zone
GeForce GTX 480 : 480 SP, 700/1401/1848MHz core/shader/mem, 384-bit, 1536MB, 250W TDP, US$499
GeForce GTX 470 : 448 SP, 607/1215/1674MHz core/shader/mem, 320-bit, 1280MB, 225W TDP, US$349
* The intended GF100 has 512 SP clocked at 725/1450/1050MHz with 295W TDP. It should still be released in the future but just not now. For this launch, GTX 480 has 480 SP with clocks lowered to 700/1401/1848MHz at 250W TDP.
http://vr-zone.com/articles/nvidia-geforce-gtx-480-final-specs--pricing-revealed/8635.html
 

> * The intended GF100 has 512 SP clocked at 725/1450/1050MHz with 295W TDP. It should still be released in the future but just not now. For this launch, GTX 480 has 480 SP with clocks lowered to 700/1401/1848MHz at 250W TDP.

So my earlier speculation about the GTX 490 seems to be on the money.

I say the future for this faster part is when AMD releases the 5990.
 
Yep, I doubt nVidia's strategy for this series goes much further than "beat the 5870 and get the fucking thing out the door ASAP". :LOL:
 
I saw that, but if you read through all the Q&A further down in the link ( http://blogs.adobe.com/genesisproject/2009/11/technology_sneek_peek_adobe_me.html ) you get the impression that CUDA will be the only choice, at least at the start. It's not clear whether AMD will be supported before the OpenCL version is done.

Anyway, I just wanted to point out that Fermi is already being eyed by companies like Adobe.

Because, err... Mercury was designed for GT200's revision of CUDA?
-Which the 5870 probably meets all (or most) of the specs for in terms of cache flexibility and such?

To be honest, though, it's a month or two of extra work for OpenCL (and no marketing money! Bad, bad Adobe) versus easy CUDA code reuse. ;)
 