NVIDIA Kepler speculation thread

Anandtech states it uses the 1.5GB 192bit + 0.5GB 64bit approach, and the site is generally correct on architectural details.
http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2
To be clear, that's our best guess. It's not something NVIDIA will confirm and it's not something we have been able to experimentally prove. But of the schemes NVIDIA could use, it's among the easiest to implement and likely the best performing; NVIDIA can ensure buffers stay in the lower 1.5GB, and then put less important data (e.g. irregularly used textures) in the last 512MB where it would be the least impacted by the narrow bus.
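To put rough numbers on the quote above (my own back-of-the-envelope math, using the 660 Ti's stock 6008 MT/s effective memory data rate):

```python
# Back-of-the-envelope bandwidth for the GTX 660 Ti's asymmetric bus,
# assuming the stock 6008 MT/s effective data rate per pin.

data_rate = 6.008e9  # transfers per second, per pin

full_bw = data_rate * 192 / 8 / 1e9   # all three 64-bit controllers together
narrow_bw = data_rate * 64 / 8 / 1e9  # the lone controller behind the last 512 MB

print(f"lower 1.5 GB: {full_bw:.1f} GB/s")   # ~144.2 GB/s
print(f"upper 0.5 GB: {narrow_bw:.1f} GB/s") # ~48.1 GB/s
```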
 
And yet, it's closely derived from what Nvidia said about it:
--
4 pieces 64M×32 = 1024 MB / 128-bit
4 pieces 64M×32 in x16 mode = 1024 MB / 64-bit
--
So what? With that description you can still interleave the memory space between the three 64-bit controllers like I described. That's the important thing if you want to derive the maximal usable bandwidth for blocked access. Stating it has 1 GB with 128-bit access and 1 GB with 64-bit is misleading and also misrepresents how it is organised. :rolleyes:
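To illustrate the kind of interleaving I mean, here is a toy model for the 2 GB card (two controllers carrying 512 MB each, one carrying 1 GB; the 256-byte stride is an assumption on my part, since the real mapping is not public):

```python
MiB = 1 << 20
STRIDE = 256  # bytes per interleave chunk (assumed, not documented)

def controller(addr):
    """Toy mapping of a physical address to one of three 64-bit controllers."""
    if addr < 1536 * MiB:
        # Lower 1.5 GB: chunks rotate across all three controllers, so a
        # blocked (streaming) access sees the full 192-bit bus.
        return (addr // STRIDE) % 3
    # Upper 0.5 GB lives entirely on the controller with the extra 512 MB,
    # so accesses there only ever see 64 bits of bus width.
    return 2

print([controller(a) for a in range(0, 4 * STRIDE, STRIDE)])  # [0, 1, 2, 0]
print(controller(1536 * MiB))  # 2
```

The point being: a contiguous buffer placed in the lower 1.5 GB automatically stripes across all three controllers, which is exactly why describing it as "1 GB at 128-bit + 1 GB at 64-bit" misrepresents the usable bandwidth.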
 

So... nothing. *shrugs* Just wanted to provide a hint as to where the confusion could originate - no need to kill the messenger.
 
I've read a couple of comments suggesting microstutter is particularly bad in games that pass the 1.5 GB mark.

Where did you read that?
Nvidia has been using this same memory configuration in some 1 GB cards (GTX 550 Ti, GTX 460 v2), and I haven't seen anyone complaining of unusual micro-stuttering in games using more than 768 MB.
 
Don't know about microstuttering, but looking at benchmarks which report minimum FPS too, the 660 Ti often has abysmally bad minimum FPS compared to other cards with similar average FPS.
 
What is certain is that 3 GB is a bigger number than 2 GB or 1.5 GB, and there are GTX 660 Ti cards with 3 GB :)
It could be a nice plan if you were buying it with the intent to use it for four years.
 
Where did you read that?
Nvidia has been using this same memory configuration in some 1 GB cards (GTX 550 Ti, GTX 460 v2), and I haven't seen anyone complaining of unusual micro-stuttering in games using more than 768 MB.

An Nvidia-buying guy on a GW2 forum bought two and was very disappointed with the performance.

http://www.guildwars2guru.com/topic/50075-first-660-ti-benchmarks-are-out/page__st__30#entry1749823

http://www.guildwars2guru.com/topic/50075-first-660-ti-benchmarks-are-out/page__st__30#entry1750433

Some unusual stuff there too, like flickering at idle and sudden jumps to max clocks on the desktop. Poor overclocking is one we've heard about already.
 
http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-660-oem/specifications

Nvidia announces the GTX 660 (OEM), again GK104-based.
With all the delays etc., it's starting to look like GK106 has been canned.

Looks like you may be right and that Nvidia will handle the entire 600 line with just two parts: the GK104 and the GK107.

Also, here is a link to a GT 645 (OEM) part, which seems to be a Fermi GPU with GDDR5 memory:

http://www.geforce.com/hardware/desktop-gpus/geforce-gt-645-oem/specifications

If Nvidia decided not to make the GK106, that would by itself be a reason for the GTX 660 delay, as 28 nm supply would really have to expand to allow for the much higher sales these mid-range cards would entail.

A couple of questions:

#1 How much does Nvidia save in NRE costs by not producing the GK106?

#2 Which chip (GK104 or GK107) will handle the GTS 650?
 
Is there any rumor whatsoever as to what the GTS 650 might be, and when it might be released?
http://vga.it168.com/a2012/0713/1372/000001372093.shtml (translated) from a month ago. GK107 with GDDR5 and probably ≥1000 MHz core and ≥4800 MHz memory.

If there will be a GK106 (or a similar chip around ~200 mm^2 with 768-960 CCs), it might show up "late" enough that it lasts until Maxwell without any "GK116"-type refresh. Anyway, I can't say the absence of a GK106 is unexpected. There were some hints prior to or around the release of the first Kepler chips (can't seem to find them now) that GK106 and GK110 would come after the GK104 and GK107 parts.
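For reference, a quick bandwidth estimate for that rumored GTS 650 (assuming GK107 keeps its 128-bit bus and the ≥4800 MT/s figure holds):

```python
data_rate = 4.8e9  # effective transfers per second, per pin (rumored floor)
bus_width = 128    # bits, assuming GK107's usual bus width

print(f"{data_rate * bus_width / 8 / 1e9:.1f} GB/s")  # 76.8 GB/s
```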
 

That would leave so much room for AMD with Cape Verde and Pitcairn…
 

Not only compared to the competition but also to NV's own Kepler family variants. GK107 is way too humble, IMHO, to cover what we might call the middle of the 660 SKU range.

NV re-using GK104 even for 660 OEM SKUs could mean either that GK106 is facing unexpected delays or that they simply have an unhealthy surplus of GK104 chips (binning yields).
 

There is a good chance Nvidia uses the GK106 for the GTX 650 Ti and GTX 650, moving their lineup up a notch against the competition (and, depending on clock speed, getting some margin on performance and TDP limits via turbo boost; the TDP limit is higher on non-reference models anyway, and the three articles where we saw a "reference" 660 Ti were actually done with non-reference cards flashed with a reference BIOS or downclocked, according to their writers).

If the GK106 is not fast enough to go against the 7870, this seems a logical move: the 660 with 1152 SPs goes against the 7850, the 650 Ti (GK106) fills the gap between the 7770 and 7850, and the 650 lands just above or equal to the 7770.

We know Nvidia will never cede this segment to AMD. AMD can lower prices (most of their cards have been in shops for six to eight months), so using bigger chips in a lower price range is a good solution for Nvidia: performance-wise, image-wise, and segment-wise.

(How would a GK104 with 1152 SPs and an 888 MHz turbo boost perform vs. the 980 MHz / 1033 MHz seen on the 1344-SP 660 Ti? 15-17% less?)
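Naive ALU math (SP count × clock, ignoring bandwidth, ROPs, and how well boost is sustained) actually puts the gap a bit wider than that 15-17% guess:

```python
# Raw shader throughput scales roughly with SP count x clock.
gtx660 = 1152 * 888    # rumored 660: 1152 SPs at 888 MHz boost
ti_low = 1344 * 980    # 660 Ti at its rated boost
ti_high = 1344 * 1033  # 660 Ti at the higher observed boost

print(f"vs 980 MHz:  {(1 - gtx660 / ti_low) * 100:.0f}% less")   # ~22%
print(f"vs 1033 MHz: {(1 - gtx660 / ti_high) * 100:.0f}% less")  # ~26%
```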
 
You're relying more on ANet to deliver well-optimized and well-threaded graphics engine code than you are on Nvidia to deliver a driver tailored for GW2.

-----------
My issue with my GTX280 is that even on max graphics I'm only running at 50% gpu usage with 15-25fps. Will this driver help?
-----------

Generally speaking, that side of the issue is more down to ANet than to Nvidia.

That's blaming ArenaNet?

Well, ArenaNet appears to disagree somewhat, noting large improvements with the most recent driver.

https://www.guildwars2.com/en/news/bill-freist-talks-optimization-and-performance/

[Image: framerate comparison]


Performance-wise vs. AMD, however, they are still a long way behind.

[Image: average framerate by GPU]
 
On the GPU usage percentage: I see a lot of people posting on many different game/hardware forums with low (50% or under) usage on 6xx series cards, and 5xx (not so much), mostly on any 3xx.x driver, using MSI Afterburner etc. Is it just the way the app is reading the "cores", or are the drivers kinda borked? Do any 6xx or 5xx card users here have a similar issue?
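One way to cross-check what Afterburner reports: poll the driver's own utilization counter via nvidia-smi (assuming a driver recent enough to support its query interface) while the game runs:

```python
import subprocess
import time

def gpu_utilization():
    """Ask the NVIDIA driver for its own GPU utilization figure, in percent."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

# Sample once a second; if this also sits at ~50%, the low reading is the
# driver's own view and not just an Afterburner quirk.
for _ in range(10):
    print(f"{gpu_utilization()}%")
    time.sleep(1)
```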
 


I have not followed the evolution of the problem, but it seems that in some particular games (not all), many people have had this kind of problem (BF3, etc.).

Fixed in one driver, broken again in the next, and so on, so I can imagine it is linked to the drivers. (Just reading the reports around each new beta or official driver launch, you immediately get: "Is the low usage in game X fixed by this driver? Can anyone tell me?" or "It doesn't fix the low usage in game X, but it does in game Y.")

Now of course, this is without v-sync (with v-sync on, lower usage is normal).
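A rough illustration of the v-sync case: if the GPU could render at, say, 100 FPS uncapped but v-sync holds it to 60, it only works for part of each refresh interval:

```python
refresh_hz = 60      # v-sync cap
uncapped_fps = 100   # what the GPU could manage without the cap (example figure)

busy = (1 / uncapped_fps) / (1 / refresh_hz)  # fraction of each interval spent rendering
print(f"{busy * 100:.0f}% usage")  # 60%
```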
 