NVIDIA Kepler speculation thread

If they do it at a hundred watts less [strike]than 7970[/strike], nobody would.

580 owners like myself would most definitely consider it a joke if nVidia tried to palm off 30% more performance for $500, regardless of power consumption. This isn't a die shrink or tweaked chip. This is a next generation process and it comes with certain expectations.

Bar charts are great but I certainly don't sit at my computer pondering the amount of electricity it uses. What I do notice is performance, IQ and noise. The impact to my electric bill is so negligible it's not even worth a thought. I always wonder if the people who are overly conscious about 40-50w of GPU load power consumption are equally sensitive about all the other electrical devices in their homes.
 
I always wonder if the people who are overly conscious about 40-50w of GPU load power consumption are equally sensitive about all the other electrical devices in their homes.

Why shouldn't I mind about power consumption for other electrical devices? When I installed the air-conditioning units at home I had the choice between cheaper non-inverter units and somewhat more expensive inverter ones. Ask me whether the added investment was worth it after comparing the electricity bills. And yes, besides the gaming PC I have a very humble multimedia machine at home that runs 24/7 for browsing and multimedia stuff.
 
That's my point Ail. We have so many other electrical devices in our homes that 40w higher GPU load consumption while gaming is negligible. My entire computer setup uses less power than my home entertainment system (receiver+speakers+tv) and I spend way more time using the latter.

Also, my monthly electric bill costs less than a dinner date and I live in NY which has some of the highest electricity prices in the US. Granted it may be more expensive in other parts of the world but please forgive me for not being overly concerned with the pennies it costs to power a graphics card :)
 
Lol is that a serious question? How in the world is Tahiti vs GF110 in any way relevant to Kepler vs Tahiti? Or are you just taking the piss? :)
I'm taking the piss out of idiots who think that HD7970 isn't an upgrade from GTX580, while asserting that GTX580 is/was worth buying over HD 6970.

There's an amazing number of these idiots here.
 
I always wonder if the people who are overly conscious about 40-50w of GPU load power consumption are equally sensitive about all the other electrical devices in their homes.

Well, I don't game, I use my video card for video editing purposes, so I'm a completely different target, but I can answer this question for you -- yes. Anything I have that will be on 24x7 is scrutinized for noise and power consumption. It has driven HD choices, NAS choices, dishwasher, lighting, etc. I have gone around the house turning on and off appliances and lights and checked power use using my meter outside for big picture purposes. When I built my last PC, I spent a lot of time on silentpc. I most recently replaced my front lights with LEDs and then put them on light-sensors for good measure.

But for peak power, I mostly care about noise. What I really want is something that idles on a trickle, and then doesn't have a rocket-fan attached to it that takes off when it's being used. < 30 dB preferred. I'm currently using a HIS 5750 w/ iCooler IV. I'd love something with 4x+ more oomph, but from a pricing and noise perspective, I haven't been particularly motivated yet. We'll see.

So, now you know. I'll go back to lurking, and y'all can go back to your Kepler speculating :)
 
That's my point Ail. We have so many other electrical devices in our homes that 40w higher GPU load consumption while gaming is negligible. My entire computer setup uses less power than my home entertainment system (receiver+speakers+tv) and I spend way more time using the latter.

Also, my monthly electric bill costs less than a dinner date and I live in NY which has some of the highest electricity prices in the US. Granted it may be more expensive in other parts of the world but please forgive me for not being overly concerned with the pennies it costs to power a graphics card :)

That doesn't mean, though, that there aren't users who have come to care about power consumption in recent years when judging a GPU. It comes down to how big the differences really are in the end. If GPU X consumed, say, 20% more power than a competing GPU Y that is only 10% slower, I'd personally get the latter. Only if X were noticeably quieter could it sway me over.
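Just to put that hypothetical X vs. Y example into perf-per-watt terms (a trivial sketch with the numbers as stated above, nothing more):

[CODE]
# Perf-per-watt check on the hypothetical numbers above:
# GPU X draws 20% more power, GPU Y is 10% slower.
perf_x, power_x = 1.00, 1.20   # X: baseline performance, +20% power
perf_y, power_y = 0.90, 1.00   # Y: 10% slower, baseline power

print(round(perf_x / power_x, 2))   # 0.83 performance per unit of power
print(round(perf_y / power_y, 2))   # 0.90 -> Y comes out ahead on efficiency
[/CODE]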

Kepler is supposed to have a perf/W advantage; it remains to be seen, once its family members launch, whether and to what degree that holds, and across how many members of the family. If the differences are relatively small I'll personally put them aside and judge based on all the other factors.
 
It's my understanding (and I'd gladly stand corrected by someone with real knowledge on the subject) that hotclocking ALUs might save you some die area, but it shouldn't come for free either. To answer that question one would have to know, more or less, how much of the gain the hotclock actually accounted for.
In general, designs with the same number of flops but running at twice the clock will be smaller, but not half the size. The higher-clocked version will use more power, though.
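As a rough first-order illustration of why (with made-up numbers, purely to show the shape of the trade-off): dynamic power scales roughly with C * V^2 * f, and reaching twice the clock usually needs deeper pipelining (extra latches, so area and capacitance don't halve) plus a bit more voltage:

[CODE]
# First-order sketch with illustrative numbers only, not measured data.
# Dynamic power ~ C * V^2 * f; the 0.6x area/capacitance figure and the
# +10% voltage for the hotclocked design are assumptions for illustration.
def dynamic_power(cap, voltage, freq):
    return cap * voltage ** 2 * freq

wide_slow = dynamic_power(cap=1.0, voltage=1.00, freq=1.0)  # twice the ALUs, base clock
hotclock  = dynamic_power(cap=0.6, voltage=1.10, freq=2.0)  # fewer ALUs, 2x clock, same flops

print(round(hotclock / wide_slow, 2))   # ~1.45x the power for the same throughput
[/CODE]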
 
I'm taking the piss out of idiots who think that HD7970 isn't an upgrade from GTX580, while asserting that GTX580 is/was worth buying over HD 6970.

There's an amazing number of these idiots here.
Your logic is flawed at best.

580 came earlier, so anyone who bought it would have to downgrade to 6970. Anyone who waited for 6970 to launch and bought it afterwards probably had good reasons, because the difference was only 15%, so it didn't matter.

The 7970 is an upgrade over the 580, but not worth it. I expect at least a 50% faster card when I look for an upgrade.

Also, you're mixing up "an upgrade" with "worth buying", which aren't the same thing.
 
With half nodes gone, the race dynamics have changed. The lifetime of a particular chip will increase, as we've already seen last time. And there is less improvement possible if you stay in the same process.
So if Kepler is a decent architecture, the delay is not nearly as bad as it would have been in the past.

Your point's taken, but improvements are definitely there especially at the very beginning of a process cycle.

@trinibwoy:
Here in Germany, there has just been another round of electricity price increases. Starting in March, I'll be charged 23.34 euro cents (31 US cents) per kWh plus 6.55 EUR (8.61 USD) a month. Now, having lived in northern to mid-Europe all my life, I've never been accustomed to having an AC running all day long for half a year or so, so I cannot assess whether or not a wasteful GPU makes a real difference in a typical US household. I'll happily agree that having specialized computers besides the gaming rig helps keep the TCO for the latter down, but I fail to see the point in denying that a 100 watt difference at the same performance level (as with GTX 580 vs. HD 7950, for lack of a better measure of how much extra fps are worth to anyone) is nothing to sneeze at.
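Just to put a rough number on that 100 watt gap at the rate above (the daily gaming hours are my own assumption, pick your own):

[CODE]
# Rough yearly cost of a 100 W load-power difference at 23.34 euro cents/kWh.
# The three hours of gaming per day is an assumed figure, not from the thread.
extra_watts   = 100
rate_eur_kwh  = 0.2334
hours_per_day = 3

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(round(extra_kwh_per_year * rate_eur_kwh, 2))   # ~25.56 EUR per year
[/CODE]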
 
Most of North America is still paying 10-15 cents per kWh for electricity. I pay 10.61 cents (Canadian, but close enough to US at the moment), although my flat fee is much higher ($19.28) and wouldn't really change with lower power use.

I expect most of the people who really care about power would also care about dropping $500+ on a graphics card. I guess there are the bitcoin miners, though.

I think the most important advantages a lower power envelope gives AMD will come with their future products. The 7990 could be quite impressive, and it leaves them room for a 'Review Edition', some highly binned product (or even a respin) approaching a 300W TDP, that sets a high bar and puts a crimp in Kepler's launch. If they were already approaching that mark they might well have taken the wind out of the current campaign that the 79xx products aren't good enough (although many of the same people would be saying that anyway, for whatever other reason suited the agenda), but they would also have spent their load.
 
580 came earlier, so anyone who bought it would have to downgrade to 6970.

I'm also not sure where Jawed is going with that comparison. Not only did the 580 launch first, but it's also on the same manufacturing process as the 6970.

Expectations are very different when migrating to a new node. Is this a new revelation?

I fail to see the point in denying that a 100 watt difference at the same performance level (as with GTX 580 vs. HD 7950, for lack of a better measure of how much extra fps are worth to anyone) is nothing to sneeze at.

I'm not denying that at all. The point we're debating is whether people will be happy with a $500 GTX 580 successor that's only 30% faster but with significantly reduced power consumption. I'm saying that people would scoff at that and be far happier with 580 level power and noise levels in exchange for higher performance.

The 580 is a pretty quiet card already. Why sacrifice performance to save a buck on electricity especially if they're trying to sell it for $500? Performance drives perceived value in the GPU market, not power consumption.
 
You're still holding onto this notion that whoever launches second has failed - amusing considering there are only two companies. There are just as many advantages to "losing" especially if you have a compelling product, HD4870 ring a bell?
No I'm not. Either IHV can launch second, but it doesn't have to be months and months after. Case in point: RV770 launched later than GT200, but it wasn't months late. I'm just contesting this delusion that "late" is a perception that should only be measured by a yardstick that isn't available, namely the IHV's internal timeline.

There is no long-term first mover advantage by launching a few months early in a mature market and picking up early adopter sales, especially at Tahiti prices. I don't get why people get so worked up over such short periods of time. Can you name another market where launching a new product 3 months after the competition is a failure and giving them a free pass?
Pretty sure even Nvidia would disagree with you here. They would love the extra sales and the mindshare win over AMD from being "first to market". And 3 months is a significant time period (not as bad as Fermi): a cycle is, what, 1-1.5 years, which makes 3 months 16-25% of the cycle.

Yes, GF100 was late and nVidia was fine. Hence proving my point. AMD may not have provided a public date but it's pretty clear they missed their own targets simply due to the problems they had (see the same Anand article you quoted).
I'm not sure how that proved any of your point(s). Nvidia was fine because of other factors: brand reach, loyalty, performance, features, etc. I'm not sure why you are not even sticking to your own stance: GF100 was never late, because Nvidia never publicly announced any timeline for it. :LOL:

The only thing moving is your interpretation of what I'm saying :) If I were to follow your logic then a hypothetical March launch for Tahiti instead of January would mean Kepler is only one month late instead of three. Or even better, if both companies screw up then nobody is late and the customer doesn't mind one bit. Sorry, but that doesn't click.
I'm perfectly clear on my stance and opinion. I'm using the same yardstick to measure both IHVs. What you are suggesting is to use a yardstick that isn't available, and yet somehow you arrive at conclusions based on ... I guess your gut feeling on when a particular ASIC was late.

In Trinibwoy's view clearly G80's lead over R600 mattered, but Cypress's lead over Fermi didn't matter.

There that clears it up.
Thanks.
 
I'm not denying that at all. The point we're debating is whether people will be happy with a $500 GTX 580 successor that's only 30% faster but with significantly reduced power consumption. I'm saying that people would scoff at that and be far happier with 580 level power and noise levels in exchange for higher performance.

Maybe there's a point we can agree upon. I do not see the 7970 as a successor to the GTX 580 in the sense that people already using a relatively current high-end card need to upgrade, but as an alternative for people considering upgrading their graphics performance.

Here, the 7970 is clearly the superior alternative right now, just as the 580 was clearly superior to the 5870 at the time of its launch. And that's exactly my point for applauding AMD for having taken the risk and launched first (call it early if you will): they are the #1 choice right now, except if you have very special requirements (which I cannot come up with, other than perhaps developing exclusively on CUDA).

What I am not saying is that this situation won't change once Kepler sees the light of day.
 
Well, if Kepler is indeed 512bit with 5.5Gbps memory, performance of 1.8x Fermi would be fairly reasonable to expect. That would make it ~1.3x Tahiti in bandwidth limited scenarios. I hope it doesn't cost 1.3x.... yikes!
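For what it's worth, here's roughly where those ratios come from, taking the rumoured 512-bit / 5.5 Gbps configuration at face value and using the published memory specs of the GTX 580 and HD 7970:

[CODE]
# Back-of-the-envelope memory bandwidth comparison; the Kepler config is rumour.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps   # GB/s

kepler = bandwidth_gb_s(512, 5.5)   # 352 GB/s (rumoured)
gtx580 = bandwidth_gb_s(384, 4.0)   # ~192 GB/s (GTX 580, ~4 Gbps effective)
tahiti = bandwidth_gb_s(384, 5.5)   # 264 GB/s (HD 7970)

print(round(kepler / gtx580, 2))    # ~1.83x Fermi
print(round(kepler / tahiti, 2))    # ~1.33x Tahiti
[/CODE]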
 
Well, if Kepler is indeed 512bit with 5.5Gbps memory, performance of 1.8x Fermi would be fairly reasonable to expect. That would make it ~1.3x Tahiti in bandwidth limited scenarios. I hope it doesn't cost 1.3x.... yikes!

If they launch at $649 as suggested by lenzfire it's approaching that. A price cut on the 7970 to $499 would make it true. :)
 
Jawed said:
Really? Consumers care about the node?
Aren't your expectations different?

Mine are, though not as a customer but as an interested observer. For Trini and a bunch of other enthusiasts, it's probably both.
 
Bar charts are great but I certainly don't sit at my computer pondering the amount of electricity it uses. What I do notice is performance, IQ and noise. The impact to my electric bill is so negligible it's not even worth a thought. I always wonder if the people who are overly conscious about 40-50w of GPU load power consumption are equally sensitive about all the other electrical devices in their homes.

Yup some of us sure do. Everything from my CPU, to refrigerator, to air conditioner, to TV, to home stereo, to lighting, etc...

It all adds up, and in the end I save hundreds of watts, which over a year saves me a few hundred dollars. Per month it may only be about 25-40 USD depending on usage, but over 12 months that's 300-480 USD. Or about enough to buy a new graphics card, or maybe a CPU/MB/memory upgrade, or whatever else I decide to spend those savings on if I don't just outright save it.

And I live in a state with cheap electricity, although the price has risen sharply in the last few years since we now sell much of our hydro power to California, meaning supply is far short of demand. So those savings might actually be higher now; the last time I calculated roughly how much I was saving was two years ago.

I don't replace things every time something more energy-efficient comes out (I'd lose far too much money doing that), but when I'm buying something, efficiency is always at the top of the things I consider.

Regards,
SB
 
Carsten has nailed it. Really, what are the press and users supposed to do? Wait on Kepler expecting it to blow Tahiti away? Just like they waited months on Fermi... for Fermi?

You have to accept the cards as they are now. Kepler could be awful, or it could come even later. Should AMD be punished now for having what could be the fastest card of the generation? I'm not saying I believe it will be, but right now it is, and it has to be treated as such until Nvidia prove otherwise...
 