NVIDIA Kepler speculation thread

Quite a few (full) nodes have been exploited to actually raise power over the previous generation in the past, tbh. Of course, this had more than anything to do with ever more complex chip layouts and thermal/power headroom left untapped by the previous generation. With R600/GT200, this option became pretty much exhausted - and that's also the reason why we are not seeing any +100% perf. improvements over the immediate predecessor any more: you simply cannot stack manufacturing improvements on top of a higher power budget, but have to work extra hard to keep power in check, possibly leaving some performance on the table.

150 -> 130nm: NV20 to NV30, requiring a power connector
130 -> 90nm: NV40 to G80 - breaking the 100 watt barrier, same is true for R48x -> R520/580
90 -> 65nm: G80 to GT200, breaking the 200 watt barrier, same is true for R600 (80nm) coming from R5xx
65 -> 40nm: Approaching 300 watts in the case of Fermi, AMD starting to employ more conservative steps as to perf and power
40 -> 28nm: AMD has improved performance AND lowered power compared to Cayman, Kepler yet to be determined.
 
Which card did the 460 match in performance while being only $50 cheaper? The GTX 285 launched 18 months before the 460 at $400. The 460 was $230 and was a few percent faster than the 285.

5850 is some 5-10% faster and launched at $260 9 months before.
 
5850 is some 5-10% faster and launched at $260 9 months before.

Fair enough. That's still nVidia's mainstream chip vs AMD's high end of the same (40nm) generation. This is AMD's 28nm high end vs prior generation nVidia's 40nm high end.
 
Well the GTX460 was a performance product with very attractive pricing. This is not comparable to the HD7950.
I'm not saying that GTX 460 is comparable to HD 7950. I'm saying that GTX 460 offered "last year's performance for $50 less" just like HD 7950. In fact not even that - GTX 460 was slower than HD 5850, while HD 7950 is actually faster than GTX 580.
This is AMD's 28nm high end vs prior generation nVidia's 40nm high end.
Yes. But Nvidia doesn't have anything better at this moment, so it's a fair comparison.
 
Yes. But Nvidia doesn't have anything better at this moment, so it's a fair comparison.

If you say so. When the 460 launched, AMD didn't have new chips on a next-generation process launching in a few months, but to each his own :)
 
The "last gen performance" argument is pretty stupid. Every card this gen but the 7970 (arguably 7950 with overclocking/3GB/slightly faster than 580) and maybe 1 or 2 Keplers is going to offer "last gen performance", point of fact. Yes that's right, the vast majority of Nvidia's next gen lineup is going to offer "last gen performance" too!

I guess as a criticism that the 7970 is "only" ~25% faster than the 580, OK. But if it's so bad, Nvidia should be criticized for the fact that 580 prices haven't moved an inch. A 3GB 580 still costs the same as a 7970; that seems more inexcusable pricing than anything AMD is doing. And don't use the next-gen argument, nothing is stopping Nvidia from dropping prices right now. A cheap 580 could actually be a pretty sweet deal in the current environment - great bargains on last-gen cards are the sort of thing one would hope for just as much as the "new gen should provide way more performance for cheaper" axiom. Too bad neither seems to exist anymore.
 
Apart from the fact that it's obviously highly convenient for NVIDIA to keep GTX 580 prices high, if the rumor is true that they stopped producing GF110 chips at the end of last year, a price drop would lead to higher demand, which would create a different kind of legitimate protest/complaint.

As things are, AMD can enjoy its high margins from their Tahitis and at the same time keep demand under control, and it doesn't look that much different for NVIDIA in that regard either. Haven't quite a few retailers stopped selling GTX 590s? If so, it supports the rumor of GF110 going EOL recently.
 
Every card this gen but the 7970 (arguably 7950 with overclocking/3GB/slightly faster than 580) and maybe 1 or 2 Keplers is going to offer "last gen performance", point of fact.

That's simply stating the obvious. The problem isn't getting the same performance. It's getting year old performance for a year old price. Surely you didn't completely miss that point?
 
As things are, AMD can enjoy its high margins from their Tahitis and at the same time keep demand under control, and it doesn't look that much different for NVIDIA in that regard either. Haven't quite a few retailers stopped selling GTX 590s? If so, it supports the rumor of GF110 going EOL recently.

I don't know about that EOL rumour. Due to Tahiti's pricing there will still be decent demand for the 580. Could there really be enough leftover inventory to absorb that demand till summer?
 
Just for the record my "last year's performance" was a friendly jab at trini. I dont agree with it and find it slightly amusing, sorry trini.
 
Just for the record my "last year's performance" was a friendly jab at trini. I dont agree with it and find it slightly amusing, sorry trini.

Lol I got that the first time. Thought it was pretty clear.

What don't you agree with though? The performance and price numbers are plain for everyone to see. What I'm speculating on is context, i.e. whether AMD knows they won't be forced to drop prices anytime soon.
 
Lol I got that the first time. Thought it was pretty clear.
:) Just clearing the air for others who responded to my post.

What don't you agree with though? The performance and price numbers are plain for everyone to see. What I'm speculating on is context, i.e. whether AMD knows they won't be forced to drop prices anytime soon.
Well, we have had plenty of SKUs in the past that did the same - provide similar performance for a slightly lower price but with other benefits (new features, power consumption etc.). This isn't the first and won't be the last.

I don't see how the 7950 pricing is any indication of Kepler's perf/$, to be honest. If Nvidia were in the same situation, they would also have priced their SKU the same way and dropped the price-drop bomb on the competition's launch day. Brownie points with users are probably not that high up the ladder compared to profit margins, and as with all things enthusiast, the price of entry, the cost of living on the bleeding edge, is, well... costly.
 
I read that as, charlie just hucked a barrel of darts at the wall.

Hahaha, yeah, same here... he's trying to backpedal hard after being hoodwinked. Hardware acceleration of software physics libraries can't provide amazing performance boosts, as games aren't currently CPU bound.
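The "not CPU bound" point is basically Amdahl's law: if physics is only a small slice of the frame time, accelerating just that slice buys very little overall. A back-of-the-envelope sketch (the 20% physics share and 10x speedup are made-up illustrative figures, not measurements):

```python
# Amdahl's-law estimate of whole-frame speedup when only the physics
# portion of the work is accelerated. Input fractions are hypothetical.

def overall_speedup(accel_fraction, accel_factor):
    """Speedup of the whole workload when only accel_fraction of it
    is sped up by accel_factor (Amdahl's law)."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_factor)

physics_share = 0.20   # assume physics is 20% of CPU frame time
gpu_factor = 10.0      # assume the GPU runs that physics 10x faster

print(overall_speedup(physics_share, gpu_factor))  # ~1.22x overall
```

Even a 10x speedup on the physics slice nets only ~22% more frames per second under these assumptions, which is why GPU physics alone can't deliver dramatic gains in games that aren't CPU bound.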
 
Well the GTX460 was a performance product with very attractive pricing. This is not comparable to the HD7950.

GTX 470 says hello. Slightly slower than a 6+ months old video card (5870) and a whopping 20 USD cheaper. Not even 10%.

I'd say that's pretty comparable to the 7950. Yet the GTX 470 was praised as a decent value proposition while the 7950 is lambasted.

How about the GTX 480 compared to the 5870? 10-15% more performance on average yet 130 USD more expensive - paying 35% more for 15% (being generous) more performance. That's surely a lot better than the 7970, where you pay 10% more for 20% more performance?

Those comparisons are also apt from a new-architecture POV, comparing a new arch to an established and well-optimised arch.

We're already seeing the beginnings of some work on getting up to speed with GCN with 5-20% perf increases depending on game in the last driver drop.

BTW - if you want to extend it out to 1 year, how about the GTX 570 (349 USD) compared to the 5870 (~250 USD at GTX 570 launch)? Slightly faster but at a 40% price premium. Oh, but the GTX 570 was fantastic even though it offered only a small perf increase yet cost 40% more. The 7950 has a slight perf increase over the GTX 580 yet costs 10% LESS.

[edit] Hell at this point we can do the same with GTX 580 compared to 5870. But if we do that, it makes the 7970 compared to GTX 580 look even better as the GTX 580 carried a 100% price premium over the 5870 when it launched.
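Putting rough numbers to these comparisons, here is a quick sketch of "% extra paid vs. % extra performance" for the card pairs discussed above. The launch prices and perf deltas are the approximate figures quoted in this thread (the 5% for the 570 is a rough stand-in for "slightly faster"), purely for illustration:

```python
# Price premium vs. performance premium for the card pairs discussed above.
# Prices (USD) and perf deltas are approximate figures from this thread.

def premiums(new_price, old_price, perf_gain_pct):
    """Return (% extra paid, % extra performance) for the newer card."""
    price_premium = (new_price - old_price) / old_price * 100.0
    return round(price_premium, 1), perf_gain_pct

pairs = {
    "GTX 480 vs HD 5870": premiums(499, 369, 15),  # "130 USD more expensive"
    "GTX 570 vs HD 5870": premiums(349, 250, 5),   # "slightly faster", ~5% assumed
    "HD 7970 vs GTX 580": premiums(549, 499, 20),  # "10% more for 20% more"
}

for name, (price_pct, perf_pct) in pairs.items():
    print(f"{name}: +{price_pct}% price for +{perf_pct}% performance")
```

By this crude metric the 7970 (and the 7950, which undercuts the 580) comes off noticeably better than the Fermi launches did against Evergreen.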

Regards,
SB
 
A lot of backpedaling there; seems like "Charlie was bamboozled" was right after all.

The article still makes GK104's performance entirely unclear though.

It kind of bugs me how AMD seemed to have the superior architecture but now Nvidia is just copying it (no more hot clocks, rising raw teraflops getting close to AMD's level, alleged small/sweet spot chips etc).

In any case I guess die size will be important again and efficiency per flop less so.
 
While true, and while I don't believe there's any "dedicated PhysX hardware", there's no denying that there's clearly a need for something, since apparently GPU rigid bodies, promised ages ago via APEX 1.2, still aren't available (well, weren't in November last year anyway).

Maybe they will release some new PhysX only cards, along with Keplers but in a different product line from Geforces.

That would be more than welcome if the new cards performed as they should - without bringing a 50% performance hit on the rest of the pipeline, that is. The performance hit should only be proportional to the added particles/objects/whatever rendered on screen, not what it is now, i.e. a leaf passing by the screen with hardware PhysX enabled = framerate drops to half!

I only wish 1x PCIe 2.0/3.0 would be enough for those, so we dual-GPU lovers won't have to force one graphics card down to x8 or less.
 
@Rangers

So "no hot-clock" is now a whiz-bang feature? Is it 2005 again? :) Btw, what gave you the impression that nVidia's efficiency/flop will drop with Kepler?

Maybe they will release some new PhysX only cards, along with Keplers but in a different product line from Geforces.

No thanks, that would harken back to Ageia days and make hardware accelerated physics even more of a niche and irrelevant technology. What they need to do is enable GPU acceleration of the entire PhysX library as Kaotik suggested.

That way we no longer have these stupid arguments over nVidia specific features. Developers can then use the API without worrying about adding specific effects for nVidia hardware. nVidia users will still have better performance of course (vs CPU) but it will avoid the inane arguments.
 