NVIDIA Kepler speculation thread

Yes, and they were priced prohibitively. As are the 4GB GTX 670/680 or the 3GB GTX 660 Ti.
Even today, the 4GB GTX 760 is more expensive than a 2GB GTX 770 (full GK104). They charge something like an 80€ premium for maybe 15€ worth of extra GDDR5.
What? Your opinion on price has nothing to do with it or any part of this thread. As anecdotal evidence: I don't know anybody who bought a 1.5GB, or anything less than a 3GB, GTX 580. Most bought two, and some bought three. And until Titan, they were the best consumer CUDA/model-creation cards to buy.
 

eBay search for 3GB GTX 580 under graphics cards: 13 results, starting at 250€.
eBay search for GTX 580: 98 results, starting at 150€.

So yes, your evidence is anecdotal.

It's not an opinion on price; it's a fact that nVidia cards with larger memory pools are priced to dissuade people from going for the longer-lasting option, i.e. the one where the memory amount becomes a bottleneck later.
AMD cards are priced the opposite way.

And please don't even bother to pull the bias card on me. My laptop has a GTX650M, my desktop has a GTX670 2GB and my HTPC has a GTX660 Ti.
 
Mostly unrelated, but one of my pet peeves:
1280 MB is 1.25 GB, not 1.28 GB.
Actually, neither. The amount of GDDR5 memory usually found on GTX 570 cards is exactly 1342177280 bytes. That's ~1342.2 MB, because mega means million, i.e. a thousand thousands. ;) Or, more conveniently, 1.25 GiB, since the number is base two, as indicated by the "i". Sorry for nitpicking on your nitpicking. ;)
See also: http://physics.nist.gov/cuu/Units/binary.html
 


G means 10^9 or 2^30 depending on context. No one I know uses Gi, Mi, or Ki in a professional context (at least here in the USA). I think only Germans use gibibytes. ;)

There is, however, no consistent convention that allows 1280 MB on a GTX 570 to be 1.28 GB.
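
For what it's worth, the arithmetic is easy to check. A quick Python sketch (the byte count is the GTX 570 figure quoted above) showing how the same memory reads as 1280 binary MB, ~1342.2 MB, ~1.342 GB, or 1.25 GiB, and that "1.28 GB" only appears if you take the "1280 MB" label at decimal face value:

```python
# Unit arithmetic for the GTX 570's memory, as discussed above.
BYTES = 1342177280  # 1280 * 2**20

MB, GB = 10**6, 10**9      # decimal (SI) units
MiB, GiB = 2**20, 2**30    # binary (IEC) units

print(BYTES / MiB)  # 1280.0      -> the "1280 MB" on the box (binary MB)
print(BYTES / GiB)  # 1.25        -> 1.25 GiB
print(BYTES / MB)   # 1342.17728  -> ~1342.2 MB in strict SI units
print(BYTES / GB)   # 1.34217728  -> ~1.342 GB in strict SI units

# "1.28 GB" only falls out if you take the label "1280 MB" at
# decimal face value, but the card doesn't hold 1280 decimal MB:
print(1280 * MB / GB)  # 1.28
```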
 

It's not an opinion on price; it's a fact that nVidia cards with larger memory pools are priced to dissuade people from going for the longer-lasting option, i.e. the one where the memory amount becomes a bottleneck later.
....

A GTX 580 doesn't play 2012 or 2013 titles at full details in 1080p, so if you have to go lower with details/AA, the required memory amount is lowered too. I don't think the smaller memory impacts longevity unless you want to play at <30 fps or have very specific games/requirements. For the vast majority of the market, the memory amount on Nvidia cards is perfectly sufficient. If you're serious about gaming, you should upgrade every 2-3 years anyway, at least in my opinion. And then the memory argument is quite moot, since you get a new card with more.
 
You must be kidding us...

With new generations coming less and less frequently, you're effectively telling people to upgrade every new generation (~2 years), which I cannot agree with either.

I don't kid. In many newer titles at 1080p with 4xMSAA the 580 manages as low as 30 fps avg (or even lower!), which I would consider not comfortably playable.

2-3 years is what I said. So you agree?
 
Modern titles are pigs with MSAA; deferred renderers built for predictable performance on consoles. That is unfortunate but well known.
It's not like the 9700 Pro or 6800 GT days, where you got your 4x almost for free. Alternatively, or at the same time, some of the crap between "high" and "extreme" halves your framerate too, so maybe you should cut some of the useless diminishing-returns stuff there instead.

Most gamers have something less powerful than or equal to a 580 anyway, whether the same age, older, or current gen. 2-3 years is probably fine for you, coming from what used to be yearly upgrades; for other people it's becoming more than valid to get high memory capacity simply to run future games and hold the more numerous assets, higher-res textures, shadow maps, etc.
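
To put a rough number on why MSAA is so expensive on a deferred renderer, here's a back-of-the-envelope sketch; the 20 bytes/pixel G-buffer layout is an illustrative assumption, not any particular engine's:

```python
# Rough G-buffer memory cost at 1080p, with and without 4xMSAA.
# The 20 bytes/pixel layout (albedo, normals, depth, misc) is a
# made-up but plausible example, not a specific engine's format.
width, height = 1920, 1080
bytes_per_pixel = 20   # assumed G-buffer layout
samples = 4            # 4xMSAA multiplies per-pixel storage

base = width * height * bytes_per_pixel
msaa = base * samples

print(f"G-buffer, no AA:  {base / 2**20:.0f} MiB")   # ~40 MiB
print(f"G-buffer, 4xMSAA: {msaa / 2**20:.0f} MiB")   # ~158 MiB
```

And that's just storage; shading and resolving every sample is where the real framerate cost comes from.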
 
I don't kid. In many newer titles at 1080p with 4xMSAA the 580 manages as low as 30 fps avg (or even lower!), which I would consider not comfortably playable.

2-3 years is what I said. So you agree?

No, I do not agree with you, and 4xMSAA can be compromised on. It is not the most important setting that you need to mention or touch.

You can list the titles (from 2012 and 2013) which do not run well maxed out on a GTX 580.
 
GTX580 is still a good performer.

I agree that NVIDIA too often cheaps out on VRAM. Since they have no problem with odd VRAM amounts, I'd say the GTX 770 should be a 3GB part, the 780 a 4GB part, and the 780 Ti should have 6GB. That would at least bring them in line with AMD's offerings, and the 780 Ti is a $700 card, FFS.

3GB of GDDR5 is one of the main reasons I went with a 7950 over a 660Ti. That and the amazing price I got on it :D
 
And the best-case scenario for the 660 Ti is to match the 670 :), an almost-high-end, current-gen part.
Yeah, especially if you own an overclocked one. I have the EVGA 660 Ti SC and its performance matches the 670's in all but some rare cases.

You can list the titles (from 2012 and 2013) which do not run well maxed out on a GTX 580.
Metro 2033
Arma 3
Crysis 3
Battlefield 4
Assassin's Creed 4
Tomb Raider
Batman: Arkham City
Call of Duty: Ghosts
Hitman: Absolution
Shadow Warrior
The Bureau: XCOM Declassified


You can't run these titles maxed out @1080p, with all the deferred MSAA, TXAA, PhysX, fur PhysX, TressFX, DoF, global illumination, etc., smoothly at all; you will have to cut back on image settings or sacrifice resolution. Heck, if you want 1080p@60fps, you can't even do that on a Titan-level GPU or above.

Modern titles are pigs with MSAA,
....
some of the crap between "high" and "extreme" halves your framerate too
Exactly

for other people it's becoming more than valid to get high memory capacity simply to run future games and hold the more numerous assets, higher-res textures, shadow maps, etc.
BF4 can tap into more than 2GB on large maps with MSAA; Arma 3 goes beyond 2GB with FSAA and max draw distance. COD Ghosts consistently does so, even consuming the whole 3GB of my video memory!
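
If you want to verify those numbers on your own machine, here's a minimal sketch using the NVML Python bindings (assumes the nvidia-ml-py package and an NVIDIA GPU; run it while the game is loaded):

```python
# Minimal VRAM usage readout via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)   # bytes used/total

print(f"used:  {info.used / 2**20:.0f} MiB")
print(f"total: {info.total / 2**20:.0f} MiB")

pynvml.nvmlShutdown()
```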
 
No, I do not agree with you, and 4xMSAA can be compromised on. It is not the most important setting that you need to mention or touch.

You can list the titles (from 2012 and 2013) which do not run well maxed out on a GTX 580.

See DavidGraham's post, excellent examples. Don't be fooled by benchmarks in reviews: most show only avg fps, which can be very misleading.
And although MSAA can be turned down or off, that is exactly what I did not mean when I said "maxed out". Proper antialiasing is an integral part of the gaming experience for me, not an "extra".

And with "do you agree?" I meant the upgrade cycle. You didn't say anything different from what I said there.
 
What I am saying is that with a card like the GTX 580 you can enjoy a smooth framerate at high quality (even with some settings lowered), with no regret about missing settings which actually have little to no effect on image quality.

And yes, I am saying that you can enjoy it for 4-5 years, not the 2-3 years you say.

Games from 2012 which push the GTX 580 to its limits are such a small percentage of all the titles out there that they are not even worth mentioning.
 
A GTX 580 doesn't play 2012 or 2013 titles at full details in 1080p, so if you have to go lower with details/AA, the required memory amount is lowered too.
....


Stating that the GTX 580 can't play everything from 2012/2013 maxed out, and that it therefore doesn't need more than 1.5GB of memory, is an awfully generic and ultimately wrong assumption.

First of all, the importance of a card's longevity doesn't depend only on being serious about gaming. It also depends on whether the person can afford to upgrade. It also influences the card's second-hand price, which is yet another factor that determines if/when a person can upgrade.

Secondly, configuring the graphics IQ settings in a game isn't a binary "can play/can't play" option.
It's perfectly possible to have the GTX 580 presenting spectacular graphics at >60FPS in recent games while being bottlenecked only by the memory amount.
One such example is Skyrim: it'll play completely maxed out on any GPU with performance comparable to the GTX 580's, and here the GTX 580's low amount of memory ends up limiting the high-resolution texture mods that can be applied. A slower 2GB GeForce GTX 650 Ti will be able to do more than a GTX 580 in this situation.

Third, the new consoles have iGPUs with performance characteristics very close to the GTX 580's, and yet those will have access to over 6GB of memory for graphics. Even if the lack of low-level optimizations stops the GTX 580 from ever performing as well as the consoles, the lack of memory will be the only thing that eventually stops it from playing most games of this generation, even at reduced settings.
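
As a back-of-the-envelope illustration of how texture mods eat VRAM (the uncompressed-RGBA8 assumption and the sizes are mine, purely for illustration):

```python
# Approximate VRAM cost of a single texture at different resolutions,
# assuming uncompressed RGBA8 (4 bytes/texel) plus ~33% for mipmaps.
def texture_mib(size):
    return size * size * 4 * (4 / 3) / 2**20

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: {texture_mib(size):6.1f} MiB")

# 1024x1024:   5.3 MiB
# 2048x2048:  21.3 MiB
# 4096x4096:  85.3 MiB
# A few hundred modded 2K/4K textures blow straight past 1.5GB,
# even though the GPU itself could shade them just fine.
```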
 