NVIDIA Kepler speculation thread

In your initial contrived example, you'd end up with a 99th percentile of 120 fps. If you had used their methodology with the same contrived data, you'd end up with 1 fps, which reflects very well what a user would actually be experiencing. It's fair to say that your initial takedown was just a little bit flawed and a somewhat weak foundation for calling their methods, and I quote, "one of the worst cases of marketing science."

I want you to look at their graph. They have labeled one of the columns. That label is "average frames/second". That is typically abbreviated FPS. If you check the top of their plot, you will note that they themselves label their graph that. In other words, my "initial takedown" was based on the graph they provided. I gave them a rebuttal in the context of the plot they provided. That is pretty typical in science.

You tried to claim the problem didn't exist because I had used the metric they actually presented rather than some hidden metric. I simply pointed out the obvious: the problem will continue to exist regardless of the metric you use, because the problem is with the aggregation they chose for the summary.

However, if you need to select one and only one number that has the best chance of representing what a user will experience, I think it's a pretty good proxy.

Now be constructive and come up with a better single number performance metric that can be used to calculate !/$.

Go back and read my first post. You will discover that I suggested not one - but two summary statistics that would better make their point. Interestingly enough, one of those was provided and you will note I had no problem with the second plot.

That being said, I think they are still using figures that are insufficient to make the case they want to make. I don't mind as much because the figures aren't plain wrong (as is the case with the percentile example) - they are just insufficient. There are a lot of plots that could have been done that make their point. The following would be one of them:

[chart: price vs. average performance for current NVIDIA and AMD cards, with exponential fit]


Here I have plotted the price vs. average performance (taken from techpowerup's site - their aggregate performance) of various NVidia and AMD cards today. I colored the AMD cards red to differentiate them from the NVidia cards. I colored the 780 OC edition blue. I then fit all of the data to an exponential function that fits the data pretty well (R^2 of .95). The prices came from Newegg as of today (I selected the lowest price available for each card in the techpowerup list).

This line is actually interesting. If you are above the line, then you are getting more performance for your dollar. If you are below the line, you are getting less. This paints the 780 OC in pretty decent light.
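In case anyone wants to play with this themselves, here is a rough sketch of how such a fit can be done in Python with scipy. The numbers are made-up placeholders rather than the actual techpowerup/Newegg figures, and the saturating form below is just one reasonable choice of "exponential":

[code]
# Sketch of a price-vs-performance curve fit. Placeholder data, not the real numbers.
import numpy as np
from scipy.optimize import curve_fit

price = np.array([110, 150, 200, 250, 300, 400, 450, 650, 1000], dtype=float)  # USD
perf  = np.array([ 35,  50,  65,  75,  85, 100, 105, 120,  130], dtype=float)  # relative index

# Performance rises quickly at low prices, then flattens out at the high end.
def model(p, a, b, c):
    return a - b * np.exp(-c * p)

params, _ = curve_fit(model, price, perf, p0=(140.0, 150.0, 0.003))

residuals = perf - model(price, *params)
r_squared = 1 - np.sum(residuals**2) / np.sum((perf - perf.mean())**2)
print("fit params:", params, "R^2:", round(r_squared, 3))

# Cards above the fitted curve give more performance per dollar than the trend;
# cards below it give less.
[/code]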

Of course, the problem is that AMD just has nothing up there to compete with. So NVidia tends to define the line up here - and the 780 regular and Titan fall right on the line. Actually, it is uncanny how close they are. If I had to bet, I would guess that NVidia did a similar analysis and set their price accordingly.

In any case, the plot demonstrates a couple of things. First, the increase in performance for price tails off dramatically in the high-end segment. It is almost linear at first, but then drops off rapidly. Second, NVidia actually isn't as badly off as some people are making them out to be. For the record, I expected NVidia to perform horribly in the upper segments. I was pretty surprised when they didn't. I would attribute this more to the lack of AMD cards up here to help solidify the segment than to any silicon-based success for NVidia, though.
 
$650 is a joke - you get equal performance in the 7970GE for 40% less.

What? I dislike this partisan rubbish (from both sides), but I dislike bald-faced lies more. You do not get equal performance; the 7970GE is slower across the board, end of story.

*I'm not saying that's a bad thing, since it's also cheaper and thus potentially the better option.
 
I want you to look at their graph. They have labeled one of the columns. That label is "average frames/second". That is typically abbreviated FPS. If you check the top of their plot, you will note that they themselves label their graph that. In other words, my "initial takedown" was based on the graph they provided. I gave them a rebuttal in the context of the plot they provided. That is pretty typical in science.
Yeah, they start their article by saying you should read their initial Sept '11 article. For the final graph, they convert the 99th-percentile frame time back to FPS because it makes it easier to compare against the avg FPS number. If you're not familiar with the methodology they've used ever since, I can see how this can be confusing.
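To make the conversion concrete, here's a tiny sketch with invented frame times (not TR's data) showing how a 99th-percentile frame time maps back onto the FPS scale:

[code]
# Invented frame times in milliseconds; not TechReport's data.
import numpy as np

frame_times_ms = np.array([16.7] * 98 + [50.0] * 2)  # mostly smooth, 2% slow frames

avg_fps = 1000.0 / frame_times_ms.mean()         # classic average FPS (~58 here)
p99_ms  = np.percentile(frame_times_ms, 99)      # 99th-percentile frame time (50 ms here)
p99_fps = 1000.0 / p99_ms                        # the same number expressed as FPS (20)

print(f"avg: {avg_fps:.1f} fps, 99th percentile: {p99_ms:.1f} ms (= {p99_fps:.1f} fps)")
[/code]

The converted figure carries no extra information; it just puts the 99th-percentile result on the same scale as the avg FPS number so the two can sit side by side in one chart.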

But scientific papers always build on a history. They pointed to that history. You didn't read it and made the wrong conclusion.

Go back and read my first post. You will discover that I suggested not one - but two summary statistics that would better make their point. Interestingly enough, one of those was provided and you will note I had no problem with the second plot.
Yes, and it's really simple to come up with a contrived example that breaks your metric. But that aside: your metric would be just as terrible as avg FPS at exposing stutter issues. The whole premise of the TR methodology is based on the notion that stutter is bad. If you don't agree with that, fine, argue that stutter is not important. Just don't call them idiots based on a difference in opinion.

Also, TR is using two metrics: the 99th percentile and avg FPS. They point out different things. It'd be interesting to see how much additional variation your metric would give, but I think it will simply fall much closer to the avg FPS without exposing stutter. Exactly what is the added benefit of a second metric if it doesn't expose any additional and substantial flaws? It's just redundant.

People have argued that the 99th percentile is too generous, and that it should be increased to punish stutter even more.

(as is the case with the percentile example)
Only if you don't care about stutter. I find them very useful and I'm clearly not alone.

Here I have plotted the price vs. average performance (taken from techpowerup's site - their aggregate performance) of various NVidia and AMD cards today. ...
Yes, interesting graph.
 
What? I dislike this partisan rubbish (from both sides), but I dislike bald-faced lies more. You do not get equal performance; the 7970GE is slower across the board, end of story.

*I'm not saying that's a bad thing, since it's also cheaper and thus potentially the better option.
$380 (for the 7970) is a joke - you get equal performance in the 660 Ti for 30% less.

Repeat the same logic ad nauseam, and it's still a patently stupid argument. Even if we assume that he's talking about overclocked performance, it doesn't make the slightest amount of sense. The 780 can be overclocked as well. It's a tired argument that's been proven to be embarrassingly flawed over and over.

High end cards invariably offer lower performance per dollar than mid range cards. We might as well start saying that any card that isn't a 6670 - the current performance per dollar king charted on TPU - is a ripoff.

Great idea.
 
The 6670 isn't that great: there is the DDR3 model, which is slow (but cheap), and the GDDR5 model, which is great, but a 7750 GDDR5 is not much more expensive. Actually, the GTX 660, 7850, and 7870 XT do a great job at performance per dollar. Calling the 6670 better would be a fallacy if its perf/dollar figure is only around +10% higher (I don't know the exact figures); at this level such a difference can be considered insignificant and simplified away.
 
Very interesting picture.
I've made a corollary, which roughly says "don't buy a card over $200" if you're short on money.

[chart: price vs. performance illustrating the "don't buy a card over $200" corollary]

Which site(s) data did you use for the performance charts?

There are a few cards there which I'm fairly sure I can identify, but whose placement doesn't reflect their performance based on the reviews I've seen.
 
Xalion says he took the data from techpowerup (you can read this right below the original picture).
Sure, the data can be a bit flawed (it's always flawed, given how much variation there is between games and even between sections of a game), and techpowerup tests on a wide range of resolution + AA settings. If the average over all results is used, it may give a different picture than a 1920x1080-only review, for example (or one that does 1920 + 2560 tests).
 
Whoops, didn't even notice it was Xalion - but regardless, for Xalion then: you apparently used the "all resolutions" aggregate, which, while it does represent just that, is something that should generally be ignored. "No-one" uses 1280x800 or even 1680x1050 anymore on midrange or higher discrete cards, which skews the performance metrics on anything above, say, the 7800/650 Ti Boost range.

The problem then, of course, is "which resolutions to use on which card", but using just 1920x1200/1080, for example, is more representative for most users (from Steam, under 4% use 1280x800 and under 9% use 1680x1050, while 1920x1080 is used by over 30%).
 
Don't forget Techreport has a penchant for discarding AMD-centric games like Dirt Showdown from the graphs because it's an "outlier" (50% faster), but is quite happy to include games like Assassin's Creed 3 when it throws up a result like 80% faster for the 660 Ti vs the 7950.

Whoops, that's Techreport, not techpowerup. I agree with Kaotik that cards should be put into their respective resolution segments, and 1080p is almost the minimum for anything above a 7850 these days.
 
The problem then, of course, is "which resolutions to use on which card", but using just 1920x1200/1080, for example, is more representative for most users (from Steam, under 4% use 1280x800 and under 9% use 1680x1050, while 1920x1080 is used by over 30%).

There is some variation in resolutions among low-end/older monitors.
You should probably lump together 1280x800 and 1366x768, and 1280x1024 and 1440x900; this gives you a higher percentage, and even more if you add all four of those resolutions together.

So with Steam hardware survey's numbers I get
- 1024x768 : 3.6%
- 1280x800 and 1366x768 and 1360x768 : 26.8%
- 1280x1024 and 1440x900 : 15.2%
- 1680x1050 : 8.2%
- 1920x1080 : 30.7%
- 1920x1200 : 2.9%
- 1600x900 : 7.6% (I hadn't noticed it)
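If you want to see how those groups stack up against 1080p-and-above, the bookkeeping is trivial (shares hard-coded from that survey snapshot, so they will drift over time):

[code]
# Grouped Steam survey shares from the list above (percent of users; a single snapshot).
groups = {
    "1024x768": 3.6,
    "1280x800 / 1366x768 / 1360x768": 26.8,
    "1280x1024 / 1440x900": 15.2,
    "1600x900": 7.6,
    "1680x1050": 8.2,
    "1920x1080": 30.7,
    "1920x1200": 2.9,
}

at_least_1080p = groups["1920x1080"] + groups["1920x1200"]
below_1080p = sum(groups.values()) - at_least_1080p
print(f"1080p and up: {at_least_1080p:.1f}%, below 1080p: {below_1080p:.1f}%")
[/code]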
 
Yeah, they start their article by saying you should read their initial Sept '11 article. For the final graph, they convert the 99th-percentile frame time back to FPS because it makes it easier to compare against the avg FPS number. If you're not familiar with the methodology they've used ever since, I can see how this can be confusing.

But scientific papers always build on a history. They pointed to that history. You didn't read it and made the wrong conclusion.

Let me make this really simple for you - because you obviously still do not understand why what you are saying is wrong.

I have two numbers - taken from a random test on TechReport. These were in frame times, and have been converted to FPS. Please tell me which card has the best average behavior on the game they were taken from:

Card A: 58.48
Card B: 56.18

Now, answer the following questions:

A) Which card had better average performance on this game?
B) Which card actually displayed micro-stutter on this game?
C) How perceptible are the differences according to the reviewer?

You will quickly realize that from these numbers, you can't answer any of them. This example is amusing, because one of these cards does actually perform much better on average than the other, and one of these cards does actually display some micro-stutter. It is actually pretty much exactly the scenario I set up in the first post.
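To see why, here is a purely synthetic illustration - invented frame times, not TechReport's data - of two traces with nearly identical averages where only one of them contains stutter spikes:

[code]
# Synthetic frame-time traces in milliseconds; invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Card A: consistent pacing around ~17 ms per frame.
card_a = rng.normal(17.1, 0.5, 1000)

# Card B: slightly faster most of the time, but 2% of frames spike to ~60 ms.
card_b = rng.normal(16.3, 0.5, 1000)
card_b[::50] = 60.0

for name, ft in (("A", card_a), ("B", card_b)):
    print(f"Card {name}: avg {1000.0 / ft.mean():.1f} fps, "
          f"99th-percentile frame time {np.percentile(ft, 99):.1f} ms")
[/code]

Both averages land in the same ~58 fps neighborhood, so the average alone cannot tell you which trace has the spikes.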
 
Here is 1920x1200. I included the fit this time in case people were interested. I'm willing to do other resolutions if people want - now that I've found the price of the cards, the rest is just entering a few numbers and fitting a line. I'm not really trying to hide anything or make a point for one vendor or the other - I was just trying to show what I consider a better way to talk about performance vs. price. My initial choice of all resolutions was for convenience, not out of a desire to pull a fast one. I should note the R^2 is much worse on this, but that is due to modifying the function slightly because of a correlation problem that was preventing convergence.

[chart: price vs. average performance at 1920x1200, with fit]
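As an aside, when a fit like this refuses to converge because the parameters are strongly correlated, one common workaround is to rescale the x axis and give the optimizer explicit starting values. Here is a sketch of that idea, with the same placeholder numbers as before and not necessarily the exact tweak used for the plot above:

[code]
# Generic workaround for correlated parameters: fit against a rescaled price axis.
import numpy as np
from scipy.optimize import curve_fit

price = np.array([110, 150, 200, 250, 300, 400, 450, 650, 1000], dtype=float)
perf  = np.array([ 40,  55,  70,  80,  90, 105, 110, 125,  135], dtype=float)

p_scaled = price / price.max()   # work in [0, 1] instead of raw dollars

def model(x, a, b, c):
    return a - b * np.exp(-c * x)

params, _ = curve_fit(model, p_scaled, perf, p0=(140.0, 120.0, 3.0))
print("fit params (in scaled-price space):", params)
[/code]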
 
What? I dislike this partisan rubbish (from both sides), but I dislike bald-faced lies more. You do not get equal performance; the 7970GE is slower across the board, end of story.

*I'm not saying that's a bad thing, since it's also cheaper and thus potentially the better option.

Bald-faced lies are always the worst (just kidding :D ). Anyway, EVGA has outdone themselves with the $659 GTX 780 SC ACX, and the performance improvement vs. the GTX 680 and HD 7970 GHz Ed. across a wide variety of games is staggering: http://forums.anandtech.com/showpost.php?p=35059456&postcount=526
 
Don't forget Techreport has a penchant for discarding AMD-centric games like Dirt Showdown from the graphs because it's an "outlier" (50% faster), but is quite happy to include games like Assassin's Creed 3 when it throws up a result like 80% faster for the 660 Ti vs the 7950.

Whoops, that's Techreport, not techpowerup. I agree with Kaotik that cards should be put into their respective resolution segments, and 1080p is almost the minimum for anything above a 7850 these days.

[semi-ot]TPU disables TressFX in Tomb Raider btw; W1zzard promised to look into it, because no other game is "discriminated against" by disabling something that works for everyone but benefits the other vendor. He hasn't changed it yet, though.[/ot]
 
Let me make this really simple for you - because you obviously still do not understand why what you are saying is wrong.
Obviously.

I have two numbers - taken from a random test on TechReport. These were in frame times, and have been converted to FPS. Please tell me which card has the best average behavior on the game they were taken from:

Card A: 58.48
Card B: 56.18

Now, answer the following questions:

A) Which card had better average performance on this game?
B) Which card actually displayed micro-stutter on this game?
C) How perceptible are the differences according to the reviewer?
Card A has better 99th-percentile performance than Card B. ;)

It's really that simple.

You are trying to coerce a root-cause analysis out of a summary number. That's pointless: if you want to root-cause the problem, why stare at a single number when you have the complete data set? The 99th-percentile number gives a rating that takes into account and penalizes certain bad behavior.

When my daughter comes home with a mediocre average grade, I don't try to figure out from a single number what went wrong: I look at the individual grades if I want to understand her strong and weak points.

You can come up with as many single metrics as you want; none of them will answer all of your questions at the same time.
 
The 6670 isn't that great: there is the DDR3 model, which is slow (but cheap), and the GDDR5 model, which is great, but a 7750 GDDR5 is not much more expensive. Actually, the GTX 660, 7850, and 7870 XT do a great job at performance per dollar. Calling the 6670 better would be a fallacy if its perf/dollar figure is only around +10% higher (I don't know the exact figures); at this level such a difference can be considered insignificant and simplified away.
I was basically doing a reductio ad absurdum. The point I was making is that if seahawk believes that performance per dollar is the only metric that matters, and that the 7970GE is better than the 780 based on that sole metric, then clearly every card that is not a 6670 (the best performing card per dollar) is a "worse card" to seahawk than the 6670.

As you and I both agree, that's ridiculous. I doubt seahawk would agree with his own logic either, had he actually put some thought into what he was saying.
 