NVIDIA Kepler speculation thread

Why all the off-topic posts about the GTX 580, which is not even a Kepler?

The origin of the discussion is that the GTX 780 Ti is a great GPU whose useful life will be cut short by the small amount of memory in nVidia's default design/specs, just as happened with the GTX 580.
 
Stating that because the GTX 580 can't play everything from 2012/2013 maxed out it therefore doesn't need more than 1.5GB of memory is an awfully generic and ultimately wrong assumption.

First of all, the importance of a card's longevity doesn't depend only on how serious someone is about gaming. It also depends on whether the person can afford to upgrade. Longevity also influences the card's second-hand price, which is yet another factor in determining if/when a person can upgrade.

Secondly, configuring the graphics IQ settings in a game isn't a binary "can play/can't play" option.
It's perfectly possible to have the GTX 580 delivering spectacular graphics at >60 FPS in recent games while being bottlenecked only by its memory amount.
One such example is Skyrim: it plays completely maxed out on any GPU with performance comparable to the GTX 580's, and here the GTX 580's small memory pool ends up limiting the high-resolution texture mods that can be applied. A slower 2GB GeForce GTX 650 Ti will be able to do more than a GTX 580 in this situation.

Third, the new consoles have iGPUs with performance characteristics very close to the GTX 580's, and yet those will have access to over 6GB of memory for graphics. Even if the lack of low-level optimizations stops the GTX 580 from ever performing as well as the consoles, the lack of memory will be the only thing that eventually stops it from playing most games of this generation at reduced settings.

First of all, I disagree - it is. People spend money on all kinds of crap, be it the newest smartphone, fancy new clothes they don't need, driving their car where they could walk or cycle, smoking, etc. $200 every 2-3 years for a midrange video card of the newest generation is perfectly affordable imo. It's about lifestyle preferences, nothing else. I had two 3GB 580s, but only because of SLI. For that it makes sense; otherwise it's quite questionable. It's like the frequency game: people always think more is better, that they need more. And just because a game uses more memory on a card with larger VRAM doesn't mean a card with less would fail in that scenario. VRAM usage != VRAM requirement! People so easily forget that.

Secondly, a generic statement is better than cherry-picking (Skyrim with mods). I would expect that most games play just fine on a 580 IF you lower settings accordingly - despite the 1.5 GB. For instance, 2xMSAA instead of 4xMSAA not only gives more performance in general but also requires less VRAM; these things often go hand in hand. A 2 GB GTX 650 Ti is a joke compared to the 580. In 99% of all cases it doesn't have the raw compute power to fully utilize the additional 512 MB.
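
To put rough numbers on the MSAA point, here's a back-of-envelope sketch (my own illustration, assuming an RGBA8 color buffer plus a 32-bit depth/stencil buffer at 1920x1080, and ignoring texture memory, compression and driver overhead):

```c
#include <stdio.h>

/* Rough multisampled framebuffer cost at 1920x1080.
   Assumes 4 bytes/sample for color (RGBA8) and 4 bytes/sample for
   depth/stencil (D24S8); real drivers add metadata and alignment. */
int main(void)
{
    const long long width = 1920, height = 1080;
    const long long bytes_per_sample = 4 + 4;  /* color + depth/stencil */

    for (int samples = 1; samples <= 8; samples *= 2) {
        long long total = width * height * samples * bytes_per_sample;
        printf("%dxMSAA: ~%lld MB\n", samples, total / (1024 * 1024));
    }
    return 0;
}
```

Dropping from 4xMSAA to 2xMSAA saves on the order of 30 MB at 1080p in this simple model, and more with wider render targets or deferred G-buffers.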

Third, you cannot compare console hardware to PC hardware. Closer-to-the-metal programming models extract much more performance from the consoles. And as DavidGraham correctly said, it's about 5 GB in total (VRAM+RAM unified) for a game on the PS4 or XB1.
 
Not even close.

Using a 3-word sentence won't make your post right. It'll only make it trollish, which brings me to:

Less than 5.5GB is available for PS4 games, and the same goes for the XO; the rest is dedicated to the OS. So the GPU in these consoles could use maybe 3GB for its video data.

Nitpicking over a difference of ~8% between what I said and what you said.
Again, trollish.

The GTX580 is comparable to the xbone's and ps4's iGPUs in compute performance, geometry processing, fillrate, memory bandwidth and featureset. Any theoretical comparison that you ever bother to read will tell you the same.
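
For the compute side of that comparison, the peak-throughput arithmetic is easy to check (a sketch using the commonly published shader counts and clocks; peak FLOPS obviously don't capture all the architectural differences between Fermi and GCN):

```c
#include <stdio.h>

/* Peak single-precision throughput = shaders * 2 FLOPs (FMA) * clock.
   Shader counts and clocks are the commonly published figures. */
int main(void)
{
    double gtx580 = 512  * 2 * 1.544;  /* 512 CUDA cores @ 1544 MHz hot clock */
    double ps4    = 1152 * 2 * 0.800;  /* 1152 GCN shaders @ 800 MHz */
    double xb1    = 768  * 2 * 0.853;  /* 768 GCN shaders @ 853 MHz */

    printf("GTX 580: %.0f GFLOPS\n", gtx580);  /* ~1581 */
    printf("PS4 GPU: %.0f GFLOPS\n", ps4);     /* ~1843 */
    printf("XB1 GPU: %.0f GFLOPS\n", xb1);     /* ~1310 */
    return 0;
}
```
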
Besides, what exactly was the point in your post?



First of all, I disagree - it is. People spend money on all kinds of crap, be it the newest smartphone, fancy new clothes they don't need, driving their car where they could walk or cycle, smoking, etc. $200 every 2-3 years for a midrange video card of the newest generation is perfectly affordable imo. It's about lifestyle preferences, nothing else.

This is the most conceited and self-absorbed opinion I've ever seen about being able to make a purchase.
If I can afford it, then everyone else must be able to afford it too. Otherwise, they must just be wasting too much money on cigarettes and stuff.

This conversation ends here for me. I choose not to maintain any kind of argument with someone carrying this attitude.



I wonder if they could have built 2GB 580s using mixed channel widths like the 550 Ti.

Since nVidia introduced that ability with GF114, which came out half a year after the GTX 580, the GF110 may simply not have been designed for it.
But they sure could have done it for any Kepler chip. At least the GK104 and GK106 have that ability (660 Ti and 650 Ti Boost).
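
For reference, the mixed-density trick is just arithmetic over the 32-bit channels. Here's a sketch of how the 550 Ti reaches 1GB on a 192-bit bus, and how a hypothetical 2GB 580 could have been laid out on its 384-bit bus (the 580 config is my extrapolation, not a real product):

```c
#include <stdio.h>

/* Mixed-density GDDR5 configs: total = sum of per-channel chip sizes.
   1Gbit chip = 128 MB, 2Gbit chip = 256 MB, one chip per 32-bit channel. */
int main(void)
{
    /* GTX 550 Ti: 192-bit = 6 channels; four 1Gbit + two 2Gbit chips */
    int mb_550ti = 4 * 128 + 2 * 256;          /* 1024 MB */

    /* Hypothetical 2GB GTX 580: 384-bit = 12 channels;
       eight 1Gbit + four 2Gbit chips (never actually built) */
    int mb_580   = 8 * 128 + 4 * 256;          /* 2048 MB */

    printf("GTX 550 Ti mixed config:  %d MB\n", mb_550ti);
    printf("Hypothetical 2GB GTX 580: %d MB\n", mb_580);
    return 0;
}
```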
 
The GTX580 is comparable to the xbone's and ps4's iGPUs in compute performance, geometry processing, fillrate, memory bandwidth and featureset. Any theoretical comparison that you ever bother to read will tell you the same.
Comparing different architectures will only get you so far. The PS4's GPU is slightly better than an HD 7850, nothing more .. nothing less. The XO is out of the equation because it's at the level of an HD 7770, so far behind. If you mean that they could get close to a 580 through close-to-the-metal programming, then that is yet to be tested and proven. And the situation is not that rosy in the console space due to their lackluster CPUs, which will hold them back.

Nitpicking over a difference of ~8% between what I said and what you said.
Not nitpicking, just pointing out the falsehood of the consoles' supposed advantage in memory capacity. There are restrictions all over, and while they do have some advantage in the flexibility of memory allocation, it is nothing that current PCs can't handle, and certainly nothing to brag about.

In fact, if you look at some current cross-platform titles, such as Call of Duty: Ghosts, you'll see that the console version's texture resolution is lower than the PC version's at max quality. BF4 and NFS Rivals also suffer from reduced world detail (though that could be attributed to reducing CPU load too), suggesting even deeper restrictions on memory allocation.
 
Hey guys, it's against the forum rules to accuse other users of trolling. That stuff must be reported if serious.

Anywho, I'm way more concerned about the 2GB cards than the 3GB GTX 780. 3GB should be good well into the future, but 2GB may be limiting on the higher-end GK104 cards. By that time I'll probably be ready to upgrade from my GTX 670 anyway, but it would be a problem if I ever wanted to resell it (not that I would; all my old GPUs get passed down the "chain of command").
 
So we all know and understand that at some point Nvidia will release a dual-GPU GeForce GTX 790, right? When that happens I do not know for sure, maybe around March, i.e. CeBIT time. It is very likely that product will get two GK110 GPUs, though the rumored totals of 4992 CUDA cores, 416 TMUs and 80 ROPs actually work out to two cut-down 13-SMX chips rather than two full 780 Ti cores (which would add up to 5760/480/96). It would be a beast of a card either way. Titan, however, is now rumored to get an update as well: the GTX Titan Black Edition.
Basically, the GTX Titan Black Edition would be a new GK110-based card featuring the full 2880 CUDA cores, 240 TMUs and 48 ROPs, with 6 GB of memory running on a 384-bit interface. Clock frequencies are not yet known, but it will get a 250W TDP, a black Titan NVTTM cooler, and full double precision enabled.

http://www.guru3d.com/news_story/rumor_geforce_gtx_titan_black_edition_and_790.html
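
A quick sanity check of those rumored totals against GK110's building blocks (192 CUDA cores and 16 TMUs per SMX; 15 SMX and 48 ROPs on a full die), just my own arithmetic:

```c
#include <stdio.h>

/* Check the rumored GTX 790 totals against GK110's layout:
   192 CUDA cores and 16 TMUs per SMX, 15 SMX and 48 ROPs on a full die. */
int main(void)
{
    int cores = 4992, tmus = 416, rops = 80;   /* rumored dual-GPU totals */

    printf("Per GPU: %d cores = %d SMX, %d TMUs, %d ROPs\n",
           cores / 2, (cores / 2) / 192, tmus / 2, rops / 2);
    printf("Two full GK110s would be: %d cores, %d TMUs, %d ROPs\n",
           2 * 15 * 192, 2 * 15 * 16, 2 * 48);
    return 0;
}
```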
 
I always wonder why journalists trot out these tired, old CeBIT, CES, Computex, etc. release-date speculations.

Have Nvidia and AMD *ever* released a new desktop GPU at one of those shows? You'd think that they start seeing a pattern of being incorrect after a while, but no...
 
Evidence of a new (likely Kepler) low-end GPU.
The GT 740 has only 1GB of GDDR5 with a 128-bit bus width.

With no more information, we can only speculate on details, especially which GPU is inside.


It could be a GTX 650 retread/rename using a GK107 or a cut-down GK106. Odd to use a two-year-old chip, though.

It could be a super-cut-down GM107, though the mere existence of a 6-pin power option makes this unlikely.

The most interesting possibility: it could be a new Kepler, a GK207 to complement GK208. Perhaps with the hinted upcoming sm_37 architecture? GK208 is sm_35, unlike the older GK10x series.
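
For anyone wanting to check what they actually have in hand, compute capability is directly queryable through the CUDA runtime API (standard cudaGetDeviceProperties usage; requires the CUDA toolkit, device 0 assumed):

```c
#include <stdio.h>
#include <cuda_runtime.h>

/* Print the compute capability (sm_XY) of CUDA device 0.
   Build with: nvcc smquery.cu -o smquery */
int main(void)
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    /* GK208 reports sm_35; the older GK104/106/107 report sm_30 */
    printf("%s: sm_%d%d\n", prop.name, prop.major, prop.minor);
    return 0;
}
```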
 

Agreed.. it could well be a GTX 650 rebranding. But it could also be a cut-down GM107; the 6-pin power plug appears to be on only one of the cards, indicating that it is not a requirement.

GK107 would be cheaper as it has a smaller die, but if they have harvested GM107 chips, then it would be better to use them, I suppose. Launch looks imminent, so we should find out shortly.
 
It could be a GTX 650 retread/rename using a GK107 or a cut-down GK106. Odd to use a two-year-old chip, though.

It could be a super-cut-down GM107, though the mere existence of a 6-pin power option makes this unlikely.
Agreed.. it could well be a GTX 650 rebranding. But it could also be a cut-down GM107; the 6-pin power plug appears to be on only one of the cards, indicating that it is not a requirement.
This link shows the GT 740 GPU to be a GK117.

http://gpuboss.com/graphics-card/GeForce-GT-740
 
NVIDIA hits a new low: they launched 3 new GeForce GT 730 models

GeForce GT 730 with DDR3 memory and 128-bit interface
GeForce GT 730 with DDR3 memory and 64-bit interface
GeForce GT 730 with GDDR5 memory and 64-bit interface

Doesn't sound too bad, does it?
Yeah, it doesn't - until you realize that the first of the 3 has only 96 CUDA cores while the other two have 384.
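
The difference between those three memory configs is easy to quantify. A rough sketch using typical shipping memory clocks (~1.8 GT/s effective for the DDR3 models, ~5.0 GT/s for the GDDR5 one; assumed values, not official specs):

```c
#include <stdio.h>

/* Peak memory bandwidth = (bus width in bytes) * effective data rate.
   Clocks are typical shipping values, not official specs. */
int main(void)
{
    struct { const char *cfg; int bus_bits; double gtps; } models[] = {
        { "GT 730 DDR3  128-bit", 128, 1.8 },
        { "GT 730 DDR3   64-bit",  64, 1.8 },
        { "GT 730 GDDR5  64-bit",  64, 5.0 },
    };

    for (int i = 0; i < 3; i++)
        printf("%s: %.1f GB/s\n",
               models[i].cfg, models[i].bus_bits / 8 * models[i].gtps);
    return 0;
}
```

So the 64-bit GDDR5 model ends up with the most bandwidth of the three, while the 128-bit DDR3 one pairs the widest bus with the fewest CUDA cores.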
 
NVIDIA hits a new low
Really? Seems rather hyperbolic.
they launched 3 new GeForce GT 730 models
And this tops the four* GT 630 variants in what way?
Doesn't sound too bad, does it?
No, not really, given that users in the likely market segment either wouldn't know a CUDA core from a VGA-out or are simply looking for a card for display output.

*
GT 630 (GF108/96 core/128-bit/DDR3)
GT 630 (GF108/96 core/128-bit/GDDR5)
GT 630 (GK107/192 core/128-bit/DDR3)
GT 630 (GK208/384 core/64-bit/DDR3)
 