NVIDIA Kepler speculation thread

Apparently this is the first review of the 670, so enjoy ;D.

http://www.ozeros.com/2012/05/review-nvidia-geforce-gtx-670/

It's in Spanish, but numbers are numbers. Speaking of them, it's funny how the 580 is almost at 7970 performance (especially in RD5, where in KitGuru's two-month-old review the 580 sits below the 7850, and I don't think its OC could scale that much 0.0), and how the 670 is head to head with the 680.

Thanks.
Actually, it's not the first time something like this has happened. In previous generations we also saw cases where supposedly lower-performing parts came in at the same level, higher, or only negligibly lower. NV knows how to play its cards.

I hope we'll see another price reduction from AMD: the 7970 at 350 if the 670 is 400. :LOL:
Come on.



Some posters in this thread don't seem to understand the basic difference between "yield" and "capacity".

AMD and Qualcomm appear to be experiencing good yield but low capacity. Nvidia says it is seeing poor yield and low capacity. The question is why there's a difference.

Something to do with this?

At the GPU Technology Conference in 2010, Jen-Hsun Huang explained: "We found a major breakdown between the models, the tools and reality. So when we got the first Fermi back, that piece of fabric, so imagine we're all processors, all of us seem to be working. But we can't talk to each other. It's like we're all deaf, we're all mute and deaf. And we found out it's because this connection between us is completely broken. It turns out the reason why the fabric failed isn't because it was hard, but because it sat between the responsibility of two groups. The fabric is complicated in the sense that there's an architectural component, there's a logic design component and there's a physics component. My engineers who know physics and my engineers who know architecture are in two different organisations, and so you see this underlap of responsibility... 'is it my job or your job?' If you'd simply moved it from one side to the other side they'd have been more than happy to pick up the slack, but we let it sit right in the middle. 'Let's be both of our jobs'... that's a bad answer."

Read more: http://vr-zone.com/articles/how-the...-of-nvidia-reshaped-/15786.html#ixzz1uSw1N0tX
 
TR's review of the 690 is up. Nvidia has made some progress with the frame-time metering fix for microstutter. The 690 looks better than all other AMD/NV multi-GPU setups, though it's still not as consistent as a single GPU. Still a marked improvement. GJ NV!
Does anyone know if they delay the actual rendering of the frame, or just its display after it has finished rendering?

In the latter case, they've mostly just masked the possibility of measuring the "error" rather than eliminating it.
 
Does anyone know if they delay the actual rendering of the frame, or just its display after it has finished rendering?

In the latter case, they've mostly just masked the possibility of measuring the "error" rather than eliminating it.

According to TR's articles on the matter, the delay is added after the frame has been rendered, before it's sent to the display.
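
In code terms, display-side metering would look roughly like this (a minimal sketch; the function name and the simple average-interval heuristic are my own assumptions for illustration, not NVIDIA's actual algorithm):

    # Toy model of display-side frame metering: frames are rendered as fast
    # as the GPUs produce them, but presentation is delayed so the gaps
    # between *displayed* frames come out even. Times are in milliseconds.
    def meter_presentation(completion_times):
        if len(completion_times) < 2:
            return list(completion_times)
        # Assumed heuristic: pace to the average interval of the sequence.
        span = completion_times[-1] - completion_times[0]
        avg_interval = span / (len(completion_times) - 1)
        presented = [completion_times[0]]
        for t in completion_times[1:]:
            # Never present a frame before it has actually finished rendering.
            presented.append(max(t, presented[-1] + avg_interval))
        return presented

    # Typical AFR micro-stutter: short-long-short gaps between frames.
    print(meter_presentation([0.0, 5.0, 33.0, 38.0, 67.0]))
    # -> [0.0, 16.75, 33.5, 50.25, 67.0]  (evenly paced on screen)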

And while reducing microstuttering is of course great, the added input lag isn't that great.
 
And while reducing microstuttering is of course great, the added input lag isn't that great.
But does it really decrease stuttering then? Certainly, it eliminates the zig-zag in the frame-time graphs used for measurement, but the uneven movement is still there, and it introduces an additional error in the temporal dimension.

Imagine we're rendering the second hand of a watch with continuous movement at 1fps (or 60x real time and 60fps). Ideally we'd like frames at the 1.0, 2.0, 3.0, 4.0, 5.0 second markers etc., while old-style micro-stuttered rendering might give us frames at 1.0, 1.5, 3.0, 3.5, 5.0 seconds and so on.

If Nvidia is only taking the 1.5/3.5 frames as rendered and displaying them at 2.0/4.0 seconds, how is that any better? You still get a long-short-long jump between the displayed frames, but additionally one frame is now displayed at the wrong time compared to the game world.
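
To put that objection in numbers (a throwaway calculation based on the clock example above; the even display times are just the idealized metered output):

    # Each frame depicts the game world at the time it finished rendering,
    # but is shown at the metered display time.
    rendered  = [1.0, 1.5, 3.0, 3.5, 5.0]   # world time each frame depicts
    displayed = [1.0, 2.0, 3.0, 4.0, 5.0]   # evenly metered display times
    for world, shown in zip(rendered, displayed):
        print(f"shown at {shown:.1f}s, depicts {world:.1f}s, "
              f"error {shown - world:.1f}s")
    # Every other frame is shown 0.5s stale: the on-screen cadence is even,
    # but the depicted motion still advances 0.5s, 1.5s, 0.5s, 1.5s per frame.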
 
Some posters in this thread don't seem to understand the basic difference between "yield" and "capacity".

AMD and Qualcomm appear to be experiencing good yield but low capacity. Nvidia says it is seeing poor yield and low capacity. The question is why there's a difference.

NV does not say it has poor yield and low capacity. They said that capacity is not as good as they want and that yields are also not as good as they want. That does not mean yields are bad. If you are limited by the number of wafers, even OK yields are not enough to meet demand.
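
A back-of-the-envelope illustration of that last point (every number here is invented purely for the arithmetic; these are not actual TSMC or NVIDIA figures):

    # Hypothetical figures: wafer supply, not yield, as the binding constraint.
    wafers_per_month = 5000     # allotted 28nm wafer starts (made up)
    dies_per_wafer = 100        # candidate dies per wafer (made up)
    yield_rate = 0.6            # an "OK, not great" yield (made up)
    demand = 600_000            # good dies needed per month (made up)

    good_dies = wafers_per_month * dies_per_wafer * yield_rate
    print(good_dies, demand)    # 300000.0 vs 600000: supply falls short
    # Even a perfect 100% yield would only give 500,000 dies -- still short.
    # Only more wafer starts close the gap: that is "low capacity" in action.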
 
According to TR's articles on the matter, the delay is added after the frame has been rendered, before it's sent to the display.

And while reducing microstuttering is of course great, the added input lag isn't that great.

Have you had firsthand experience with the GTX 690 to make such a claim?

If anything, it shouldn't be any worse than the input lag caused by standard v-sync, because the delay is sub-frame in duration (half a frame time at most, in an unlikely worst-case scenario) and only happens every other frame.
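
For scale, at a 60Hz refresh the frame time is 1/60 s ≈ 16.7 ms, so under the half-frame worst case described above the added metering delay tops out around 8.3 ms, and only on every other frame, whereas plain v-sync can hold a finished frame back by up to a full 16.7 ms.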
 

Here we noticed that the GeForce GTX 670 once again ramped up beyond its default boost clock speed of 980MHz by recording a peak clock speed of 1097MHz.

Forgive me but I'll contain my enthusiasm until I see what actual retail samples can do. Plus:

AMD Radeon HD 7970 3GB GDDR5 (AMD Catalyst 12.2 Preview Version)
AMD Radeon HD 7950 3GB GDDR5 (AMD Catalyst 12.2 Preview Version)
 
If Nvidia is only taking the 1.5/3.5 frames as rendered and displaying them at 2.0/4.0 seconds, how is that any better? You still get a long-short-long jump between the displayed frames, but additionally one frame is now displayed at the wrong time compared to the game world.

Not necessarily. Frame rendering rate does not always directly correlate with the game-state update rate. A proper game engine updates the game world at a constant tick rate, regardless of FPS.
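
The pattern being referred to is the classic fixed-timestep loop; a generic sketch (not any particular engine's code) might look like this:

    import time

    TICK = 1.0 / 60.0  # fixed simulation step: 60 ticks/s regardless of FPS

    def game_loop(update, render, run_for=1.0):
        start = last = time.perf_counter()
        accumulator = 0.0
        while time.perf_counter() - start < run_for:
            now = time.perf_counter()
            accumulator += now - last
            last = now
            # The simulation advances in fixed ticks; rendering happens
            # whenever a frame is ready, so FPS and tick rate are decoupled.
            while accumulator >= TICK:
                update(TICK)
                accumulator -= TICK
            render()

    # Usage with no-op callbacks: game_loop(lambda dt: None, lambda: None)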
 
Not necessarily. Frame rendering rate does not always directly correlate with the game-state update rate. A proper game engine updates the game world at a constant tick rate, regardless of FPS.
That's what I meant. Say you're aiming at the hand of my imaginary clock; by the time you're shown the 1.5-second image, the game world is already at the 2.0-second stage.
 
Forgive me but I'll contain my enthusiasm until I see what actual retail samples can do. Plus:

The card is a real bijou. :love:

The card is a very nice performer absolutely capable for 1920x1080/1200 and the newest games, which is the sweet spot these days and really, performance is not that far off from the GTX 680 at all. If we look at the Radeon 7970 series we also see that most of the time the GTX 670 definitely is the stronger product (with an exception here and there). So from the single GPU performance versus product point of view the GTX 670 manages to do really really well.

http://www.guru3d.com/article/geforce-gtx-670-review/25
 
The way I see it, AMD feels threatened by NVIDIA's latest offerings, so they play the availability card. They poke Charlie into writing some of his "glamorous" pieces, and they have Mr. Dave here in the forums raising the same questions and doing pretty much the same thing, albeit with much more elegance.

That is really the only thing that explains to me why we are even discussing this absolutely useless and trivial issue.

Yes, and it goes so far that they even convinced JHH himself to cite yield issues in last quarter's CC! Gotta hand it to the AMD black-ops guys. :oops:
 
Wow, the 670 looks like a real killer. Really close to the 680 in performance. I have to say I'm not as happy about my 680 purchase now. I just couldn't see it being that close...
 

So those tests were made by adjusting the power slider individually per game, between 75% and 91%? (Or did you set a fixed value based on those findings?)
What is Nvidia really promising: that the card will sometimes be able to hit 980MHz, or that this is the average clock in normal games?
Having the test sample run 100MHz higher than specified seems quite suspicious.

And btw, has anyone tested how much performance is affected by putting the card inside a normal case instead of on an open test bench?


Btw, indices from the usual suspects:
http://ht4u.net/reviews/2012/nvidia_geforce_gtx_670_im_test/index39.php
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_670/28.html
http://www.hardware.fr/articles/866-18/recapitulatif-performances.html
 
I find the closeness of the two SKUs notable as well.
Nvidia doesn't seem to think the 670 will cut into the 680's numbers, or it doesn't care too much if it does.

The cooler's noise levels have taken a hit according to TR, though.
 