NVIDIA Kepler speculation thread

Really? Consumers care about the node?

The ones who spend significant amounts of time discussing GPUs and closely following release cycles certainly do. What profile do you have in mind for early adopters of $500 graphics cards?

Maybe there's a point we can agree upon. I do not see the 7970 as a successor to the GTX 580 in the sense that people already using a relatively current high-end card need to upgrade, but as an alternative for people considering upgrading their graphics performance.

mapel suggested that if nVidia's 580 successor launches at $500 with 7970 performance levels, it would be a joke. You made the counterargument based on power consumption. We're not talking about the 7970 here, we're talking about a hypothetical $500 nVidia GPU with 7970 performance.

With respect to the 7970 itself we've been through that already :)
 
Yep, mapel said that if Nvidia had brought out a 580 successor with 7970 performance, people would call it a joke.

I said (after being corrected on context) that if they had done that with the same improved power consumption, nobody would. You said you would have called it a joke anyway, so my assumption that nobody would is wrong - proven by you!

Nevertheless, I stand by what I've said. I for one would most certainly not have called it a joke, and there are plenty of other points to this argument.

The 7970 is the 6970's successor, and in some of the reviews it's a little more than 30% faster - especially at the resolution that matters most to me. Then there's GPU compute, where the 7970 simply walks all over the 6970. Plus it adds quite a bit on the image quality front. Plus it has quieter cooling (comparing reference designs). And all of this in the same ballpark of energy consumption.

Oh - and did I mention that we're talking about an actual product, not some hypothetical? :)

AMD has executed very well here, and they swallowed the bitter pill of investing in GPU compute this round, which does not immediately net you returns on the gaming front - something Nvidia has been doing since GT200, paying the price in the form of large die sizes. Taking this into account, you really cannot compare AMD's situation to Nvidia's. And I will gladly applaud Nvidia as well when they up the image quality likewise, add an Eyefinity-like technique to Kepler, stay within the same power envelope (<225 watts, that is) and beat the GTX 580 on all fronts by a 30-40 percent margin. At the very least, such a product will drive down the prices the 7970 is able to command for one reason only: because AMD was first.

So, you can nay-say all you want, the 7970 is a very well-rounded product. And if Nvidia is going to beat it by 30% (as they now have to in order to have a comparable situation), then I am going to be fine with that as well.
 
The 7970 is the 6970's successor

It certainly is. It's also a claim that nVidia doesn't have the luxury of making. Their offering will be judged against the 580.

AMD has executed very well here...

I haven't seen anyone question AMD's execution and I certainly haven't. The only things called into question so far have been pricing and the mysteriously conservative clocks.

Call me optimistic but I sure hope nVidia can muster 50% more performance than GF110. I'm currently stuck between titles that are a cakewalk for the 580 and others that it can't handle at the settings I want (metro, bf3).
 
SB, that doesn't change his point. It may all add up, but individually the graphics card makes very little difference. I certainly replace things with more efficient options, but I don't bother with the GPU.
 
SB, that doesn't change his point. It may all add up, but individually the graphics card makes very little difference. I certainly replace things with more efficient options, but I don't bother with the GPU.

Sure, and the 30 watts I save on my CPU isn't much. And the 10 watts I save on each light bulb aren't much. And the 40 or 50 watts I save on the refrigerator might not be much.

If I ignore all of those and discount them as "not much" then at the end of the year I save 0 dollars.

Power savings converted to monetary savings has NEVER been about 1 large lump of electrical savings from 1 item. It's about the combined savings from multiple items.

Just from the computer alone I save over 100 watts in power due to choice of components without sacrificing much performance. A few watts from MB choice. A little bit from low voltage memory (lucky to get it on sale). A big chunk from video card. Another large chunk from CPU. A few watts from HDDs. And then a percentage of all that from the PSU. Passive cooling when able (no power used by fans), etc.
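
To put the "it all adds up" argument in numbers, here's a rough back-of-the-envelope sketch; every wattage figure, the usage pattern and the electricity price are made-up placeholders, not measurements of anyone's actual build:

```python
# Back-of-the-envelope sketch of how small per-component savings add up.
# All wattages, the usage pattern and the electricity price are assumptions.
savings_watts = {
    "motherboard": 5,
    "low-voltage memory": 3,
    "video card": 60,
    "cpu": 30,
    "hdds": 5,
    "passive cooling (no fans)": 5,
}

psu_efficiency = 0.85   # losses in the PSU make the savings at the wall larger
price_per_kwh = 0.12    # assumed electricity price in $/kWh
hours_per_day = 8       # assumed daily usage

wall_watts = sum(savings_watts.values()) / psu_efficiency
kwh_per_year = wall_watts * hours_per_day * 365 / 1000
print(f"~{wall_watts:.0f} W at the wall, ~${kwh_per_year * price_per_kwh:.0f} per year")
```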

If you start discarding power savings from any single item you've already defeated the purpose of trying to reduce consumption and increase monetary savings (I'm doing this for the money and not the environment. :p).

The same principles apply to groceries, and just about anything else in the world.

Regards,
SB
 
How much it adds up to really depends on the usage, and depending on the user, power can make a significant difference. If you're running Folding or Bitcoin mining 24/7, saving 100W on your video card (this would be more 7950 vs. 580) could mean saving over $100 a year (double that if you use CarstenS's power costs, which could mean the card paying for itself in 2 years). The average usage case isn't going to see that scenario, but that doesn't make power use irrelevant in an era where people do tend to make greener choices. It's not at the top of my list, but I might give up 5% performance to save a few trees.
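
For the 24/7 folding/mining case, the payback arithmetic works out roughly as follows; both electricity prices are assumptions (a low US-style rate and a higher rate standing in for CarstenS's figures):

```python
# Payback sketch for saving 100 W around the clock.
# The two electricity prices are assumed rates, not anyone's actual bill.
watts_saved = 100
kwh_per_year = watts_saved * 24 * 365 / 1000   # 876 kWh

for label, price in [("low rate ($0.12/kWh)", 0.12), ("high rate ($0.25/kWh)", 0.25)]:
    print(f"{label}: ~${kwh_per_year * price:.0f} saved per year")

# At the high rate that's roughly $440 over two years, which is where the
# "card paying for itself in 2 years" figure above comes from.
```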
 
How much it adds up to really depends on the usage, and depending on the user, power can make a significant difference. If you're running Folding or Bitcoin mining 24/7, saving 100W on your video card (this would be more 7950 vs. 580) could mean saving over $100 a year (double that if you use CarstenS's power costs, which could mean the card paying for itself in 2 years). The average usage case isn't going to see that scenario, but that doesn't make power use irrelevant in an era where people do tend to make greener choices. It's not at the top of my list, but I might give up 5% performance to save a few trees.
No one uses trees to power a power plant. = =
But I kind of agree with you. For me, performance per watt is what I think about first.
 
Call me optimistic but I sure hope nVidia can muster 50% more performance than GF110. I'm currently stuck between titles that are a cakewalk for the 580 and others that it can't handle at the settings I want (metro, bf3).
If you mean GK104:
In their reviewer's guide? No doubt about it.
In reviews using mostly integrated benchmarks of game applications? Possible.
On average in real-world in-game scenarios? Doubtful at best.

Nvidia, though, probably has the benefit of having carried the compute burden around for a long time already, not requiring a massive increase in transistors to make Kepler competitive on this front. But then, that held true for their large high-end GPUs, which GK104 is rumored explicitly not to be.
 
About this PhysX thing:
Charlie seems to think that Kepler could boost performance in software PhysX (PhysX via the CPU) titles. At first I was skeptical, but the PhysX software and the driver are in Nvidia's hands, so why not redirect API calls and make the GPU do the work? Nvidia has complete control over the ecosystem. Something similar has happened before with T&L back in the day, IIRC, just under DirectX.
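
For what it's worth, the "redirect the API calls" idea boils down to runtime backend dispatch. Here is a purely hypothetical sketch of that pattern; the class and function names are invented for illustration and have nothing to do with the real PhysX SDK:

```python
# Hypothetical illustration of routing the same physics calls to different
# backends at runtime. None of these names exist in the actual PhysX SDK.
class CpuPhysicsBackend:
    def simulate(self, dt: float) -> None:
        print(f"stepping the scene by {dt:.4f}s on the CPU")

class GpuPhysicsBackend:
    def simulate(self, dt: float) -> None:
        print(f"stepping the scene by {dt:.4f}s on the GPU")

def make_backend(gpu_available: bool):
    # A vendor controlling both the runtime and the driver could make this
    # choice transparently, without the game ever knowing about it.
    return GpuPhysicsBackend() if gpu_available else CpuPhysicsBackend()

scene = make_backend(gpu_available=True)
scene.simulate(1.0 / 60.0)
```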
 
About this PhysX thing:
Charlie seems to think that Kepler could boost performance in software PhysX (PhysX via the CPU) titles. At first I was skeptical, but the PhysX software and the driver are in Nvidia's hands, so why not redirect API calls and make the GPU do the work? Nvidia has complete control over the ecosystem. Something similar has happened before with T&L back in the day, IIRC, just under DirectX.

Yep, redirect code that was written for a low-thread-count, low-latency processor to a highly threaded, high-latency processor... what could possibly go wrong / how hard could it be? //Clarkson :LOL:
 
Also, most if not all games that use software PhysX are GPU bottlenecked. I don't get how boosting this part would benefit performance at all.
Oh well...maybe OBR is right and Charlie has no clue at all.
 
Also, most if not all games that use software PhysX are GPU bottlenecked. I don't get how boosting this part would benefit performance at all.
Oh well...maybe OBR is right and Charlie has no clue at all.

Personally I think AMD pissed Charlie off somehow, so he had a little cry and carry-on and is now caught between a rock and a hard place. Fanboys on both sides are kind of sitting around going "WTF, mate?!?!"
 
About this PhysX thing:
Charlie seems to think that Kepler could boost performance in software PhysX (PhysX via the CPU) titles. At first I was skeptical, but the PhysX software and the driver are in Nvidia's hands, so why not redirect API calls and make the GPU do the work? Nvidia has complete control over the ecosystem. Something similar has happened before with T&L back in the day, IIRC, just under DirectX.

Modern CPUs are fine for running the non-hardware-accelerated part of PhysX. Why would Nvidia go to such lengths to get their GPUs occupied with this? I am no coder, but I am not sure games would run faster anyway. I always get the impression that PhysX brings a performance hit even if all you want to do is type your name on the screen.
 
Think sharp - there would be other ways if you own an ecosystem. And a few weeks ago, they provided the perfect excuse.
 
So according to the rumour mill, enabling PhysX will astronomically boost performance. Wouldn't one expect the performance to take a hit due to the added content and not the other way around?
 
Think sharp - there would be other ways if you own an ecosystem. And a few weeks ago, they provided the perfect excuse.

They can:

1. Cheat by artificially slowing down CPU code (even more than they do now; x87 is not optimal).
2. Patch / have devs patch their games to use GPU PhysX - highly unlikely.

A few weeks ago, didn't Nvidia "open" CUDA?
 
Right, they did - or PR'ed to do so, depending on your point of view.

#1 is actually what they've been accused of for a long time. Now imagine the possibilities a new PhysX version would open, fully optimized for modern multi-core CPUs (beyond just distributing multiple particle emitters over a number of cores) and their advanced instruction sets. Wouldn't that make a system shine which satisfies certain hardware requirements for the new PhysX stuff to work its magic?
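
To give a feel for the kind of gap being alluded to, here is a generic vectorization demo; it has nothing to do with the actual PhysX code, and the per-element loop merely stands in for old scalar-style math:

```python
# Per-element scalar updates vs. one vectorized whole-array update.
# NumPy dispatches the array form to compiled kernels that can use SSE/AVX;
# the Python loop mostly measures interpreter overhead, but the structural
# point - one wide operation instead of a million scalar ones - is the same
# advantage SSE/AVX-optimized physics code has over x87-style scalar code.
import time
import numpy as np

n = 1_000_000
pos = np.zeros(n)
vel = np.random.rand(n)
dt = 1.0 / 60.0

t0 = time.perf_counter()
scalar_out = [pos[i] + vel[i] * dt for i in range(n)]   # one element at a time
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
vector_out = pos + vel * dt                             # whole array at once
t_vector = time.perf_counter() - t0

print(f"per-element: {t_scalar:.3f}s, vectorized: {t_vector:.3f}s")
```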
 
So activate all these optimizations if a Kepler/Nvidia GPU is installed, and don't if the driver finds an AMD GPU. That would be... bad.

Would the games have to actually use PhysX 3, or can you just patch in real time while the code is running so it magically works for all games, regardless of which PhysX SDK the devs used back then?
Aren't games almost always GPU-bottlenecked? What good would it do to accelerate code that, up to now, runs on a component which is not performance-critical (the CPU)?
 
boxleitnerb said:
About this PhysX thing:
Charlie seems to think that Kepler could boost performance in software PhysX (PhysX via the CPU) titles. At first I was skeptical, but the PhysX software and the driver are in Nvidia's hands, so why not redirect API calls and make the GPU do the work?
SW PhysX is a DLL that comes with the game. It's not a driver. AFAIK, a SW-only PhysX game will never pass through the driver. Check this out: http://physxinfo.com/news/7066/inve...man-arkham-city-with-apex-dlls-from-mafia-ii/

Nvidia has complete control over the ecosystem. Something similar has happened before with T&L back in the day, IIRC, just under DirectX.
If DirectX back in the day already supported T&L in SW, then the driver API call was already there.
 