NVIDIA GT200 Rumours & Speculation Thread

I doubt that. If that missing MUL were recoverable with small tweaks that don't amount to a new architecture, it would have been recovered in newer revisions of G92 such as the 9800 GTX, especially considering that GT200's design was apparently completed last year.
 
So if that's such a big part of R700... Why isn't anyone talking about it? Why is no rumour mentioning it? Why is no 'leak' from ATI docs heralding it as the next best thing since sliced bread? I'm not saying it's not true; I'm just skeptical, and wouldn't be surprised if that rumour was, let us say, outdated.

And Lux_, uhhhh, I have no idea why you think it's even theoretically possible NOT to render each frame from scratch? (at least for the main framebuffer)
linky poooh
 
I wonder what changes have been made to the ROPs as per the pcinlife post...

I wonder that too - except maybe for sheer numbers and things not directly related to gaming performance, which gamers therefore usually won't notice.


As to comparing RV770 to G92: why not? They couldn't have had exact GT200 numbers as of yesterday (fwiw, they won't have those even for G92b), plus GT200 seems to be clearly way above their targeted price point.
 
Actually it's not impossible; it's quite easy, as with 3D data you have exact information about the motion vector of each pixel. You know the movement (and acceleration) and rotation (and angular acceleration) of your objects and your camera. With this information you can do cheaper and more correct motion estimation than current codecs and HDTVs can. The biggest problem comes from frame latency and stuttering, as we have to do this in real time. The motion-estimated frames are much cheaper to generate (around 10x in my testing scenario) than real frames, and that uneven frame cost causes noticeable stuttering unless I queue the frames. Queuing frames, however, causes noticeable control latency (much like AFR SLI setups).
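To make that concrete, here is a minimal CPU-side sketch of reprojecting the last rendered frame along per-pixel motion vectors to synthesize an in-between frame. The Frame structure and all names are invented for illustration; a real implementation would run on the GPU with proper filtering and disocclusion handling.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Last rendered frame plus per-pixel screen-space motion vectors (pixels per frame).
struct Frame {
    int width = 0, height = 0;
    std::vector<uint32_t> color;   // packed RGBA8
    std::vector<float>    motionX;
    std::vector<float>    motionY;
};

// Synthesize a frame a fraction t of the way from the last real frame towards the
// predicted next one (t = 0.5 gives a mid-frame) by gathering backwards along each
// pixel's motion vector.
Frame extrapolate(const Frame& last, float t)
{
    Frame out = last;  // reuse dimensions and buffers as a starting point
    for (int y = 0; y < last.height; ++y) {
        for (int x = 0; x < last.width; ++x) {
            const int i = y * last.width + x;
            // Approximation: use the destination pixel's motion vector to find where
            // it "came from" in the last real frame.
            const int sx = std::clamp(int(x - last.motionX[i] * t), 0, last.width - 1);
            const int sy = std::clamp(int(y - last.motionY[i] * t), 0, last.height - 1);
            out.color[i] = last.color[sy * last.width + sx];
        }
    }
    return out;
}
```

The gather is only an approximation: disocclusions and sudden direction changes will smear, which is part of why real frames still have to be rendered regularly.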
So it's quite easy yet causes stuttering or control latency. Sounds like it's not so easy at all. And how does motion estimation compensate for lighting and perspective differences?
Also, on deferred rendering systems you can easily motion-compensate only your g-buffer creation while recalculating the lighting on every frame. But if you only generate one extra frame between the rendered frames (like I do), the lighting error between two frames is usually not noticeable. In 3D rendering, however, you can detect these unusual scenarios: if your camera moves too much in one frame, or some light turns off or on, you can do the motion estimation more precisely on that frame (or just render a real frame instead).
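A rough sketch of what that detection step could look like; the structures, thresholds, and names below are made up for illustration and would need per-engine tuning.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct CameraState { float px, py, pz; float yaw, pitch; };
struct LightState  { bool enabled; float intensity; };

// Decide whether the next frame should be rendered for real instead of being
// motion-estimated: large camera motion or a light changing state would make the
// extrapolated frame visibly wrong.
bool shouldRenderRealFrame(const CameraState& prev, const CameraState& cur,
                           const std::vector<LightState>& prevLights,
                           const std::vector<LightState>& curLights)
{
    const float maxMove   = 0.5f;   // world units per frame (arbitrary threshold)
    const float maxRotate = 0.05f;  // radians per frame (arbitrary threshold)

    const float dx = cur.px - prev.px, dy = cur.py - prev.py, dz = cur.pz - prev.pz;
    if (std::sqrt(dx * dx + dy * dy + dz * dz) > maxMove) return true;
    if (std::fabs(cur.yaw - prev.yaw) > maxRotate) return true;
    if (std::fabs(cur.pitch - prev.pitch) > maxRotate) return true;

    // Any light toggling on/off (or changing intensity sharply) invalidates the
    // extrapolated lighting, so fall back to a real frame.
    for (std::size_t i = 0; i < curLights.size() && i < prevLights.size(); ++i) {
        if (curLights[i].enabled != prevLights[i].enabled) return true;
        if (std::fabs(curLights[i].intensity - prevLights[i].intensity) > 0.25f) return true;
    }
    return false;
}
```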
In any event, I disagree with your claims. If the framerate is high, the deltas between frames may be small, but if framerates are so high, then you probably don't need to use this mechanism at all. If framerates are low, then the deltas will be higher, causing differences in lighting, position, etc. to be more noticeable. Also, the data stored in the frame and depth buffers are only approximations to what was actually sent down the 3D pipe. How would you get reasonable antialiasing by reusing data from a previous frame?

Films have an advantage that makes them more amenable to MPEG-type compression: motion blur. Since things in motion are blurred anyway, you can get away with interpolation. In 3D graphics, motion blur is (currently) a post-processing effect, not a natural feature of the rendering process.

Lux_ said:
I agree that, as of today, there probably are limitations in current APIs and hardware that make it not worth the effort: how to manage some kind of general data structure that keeps track of changes, and how to sync it between the GPU and CPU.

Yet this approach is already in use on a smaller scale. For example (if I remember correctly), Crysis uses extrapolation for some lighting calculations and recalculates only every N frames or when something significant happens. Also, instancing is a different face of the same cube.
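Presumably that amounts to something like the following amortized update; the names and the refresh interval are made up purely for illustration.

```cpp
#include <vector>

// Cached result of some expensive lighting term (an environment probe, a
// render-to-texture effect, etc.) that is only refreshed every N frames or when
// something significant changes in the scene.
struct CachedLighting {
    std::vector<float> values;     // whatever the expensive pass produces
    int framesSinceUpdate = 0;
    bool dirty = true;             // set when "something significant happens"
};

// Stand-in for the real expensive lighting pass.
void recomputeLighting(CachedLighting& cache)
{
    // ... expensive work would go here ...
    (void)cache;
}

void updateLighting(CachedLighting& cache, int refreshInterval /* = N */)
{
    ++cache.framesSinceUpdate;
    if (cache.dirty || cache.framesSinceUpdate >= refreshInterval) {
        recomputeLighting(cache);  // pay the full cost only occasionally
        cache.framesSinceUpdate = 0;
        cache.dirty = false;
    }
    // Otherwise reuse (or extrapolate from) the cached values this frame.
}
```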
Sure, you can choose not to update some render-to-texture effect, but then you're sacrificing some quality to speed up performance.

If the quality isn't identical, then it's not relevant. There are plenty of ways to speed things up if performance is all you care about; how about applications use simple shaders/textures every other frame? If quality is your concern, and it should be since graphics cards are expensive, then you shouldn't settle for compromises on image quality.

-FUDie
 
They couldn't; but that doesn't change the fact that comparing your upcoming products with 8-month-old GPUs is going to make you look desperate.

The idea is to make people wait for the new product rather than buy the 8-month-old competitor product.
 
According to Michael Hara and Daniel Vivoli at the JP Morgan Technology Conference last week, Nvidia expects to keep using G92 (in the guise of G92b) for the next 6-8 to 12 months. They also seemed to imply that the main interest of G92b was to improve yields, but I am basing this impression on a journalist's article in a different language, so I could be wrong there.

Also, I believe that if the 55nm chip were anything other than a straight shrink, they would have called it something else. They have three codenames for their new high-end chip, and yet at the same time they would share a single codename between distinct performance chips? I know they like to fudge the picture, but there must be a method to the madness.

My point is that ATI's comparison of RV770 to G92 is entirely valid, if that card is going to be the competition for the next 9 months. Now 260 and 280 are a different game altogether, but they have a different player for that.
 

This is where 90 percent of my mind-set is, but there is that 10 percent of doubt that only official data may cure.
 
*cough* frontpage *cough*. I know it hasn't been very active but still, it's not like I didn't generate a news thread for it... :) AFAICT, there was no mention of G92b whatsoever, so that article's author must have been merely speculating.
 
Oh, there's a frontpage... I sometimes forget to check it. Looking at it now.
...
Well, how is what you wrote there different from what I stated?
Changing the SKUs (and/or moving to the 55nm G92b) would also seem like it could help by making some targets less strict.
You also seem to believe G92b is associated with improving the yields. That's where Nvidia's problem is, not performance, so why would they go with a more complicated/different design to improve performance at the risk of hurting yields again?
On G92 (and presumably G92b), Hara claimed 'that product has a lifecycle of another 6-8 to 12 months'
What I used as my main argument.
 
That is to say, G92b will not be available until the whole stock of G92s is consumed, even after the launch of RV770.
 
If it's just a straight shrink ;), it could have the same changes GT200 has for its ALUs.

Now that would be interesting, but as we have seen, G92 is already direly bandwidth-bottlenecked, and that MUL might not really help with anything.

It could, however, give nVidia the chance to shrink the chip even smaller, with fewer units but with the MUL exposed.
 