--
That's most probably not true. […]
--
And yet, it's closely derived from what Nvidia said about it:
--
4 pieces 64Mx32 à 1024MB / 128-bit
4 pieces 64Mx32 in x16 mode à 1024MB / 64-bit
--
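A quick sanity check of the chip arithmetic in that quote (a Python sketch; it uses nothing beyond the figures stated above, i.e. 64Mx32 parts with four run at full x32 width and four dropped to x16):

# Sanity check of the quoted chip configuration (nothing here beyond the
# figures in the quote above).
MBIT_PER_CHIP = 64 * 32            # 64M x 32 organisation -> 2048 Mbit per chip
MB_PER_CHIP = MBIT_PER_CHIP // 8   # = 256 MB per chip

# 4 chips in x32 mode: full 32-bit interface each
x32_capacity_mb = 4 * MB_PER_CHIP  # 1024 MB
x32_bus_bits = 4 * 32              # 128-bit

# 4 chips in x16 mode: half-width interface, same capacity per chip
x16_capacity_mb = 4 * MB_PER_CHIP  # 1024 MB
x16_bus_bits = 4 * 16              # 64-bit

print(x32_capacity_mb, x32_bus_bits)   # 1024 128
print(x16_capacity_mb, x16_bus_bits)   # 1024 64
print(x32_capacity_mb + x16_capacity_mb,
      x32_bus_bits + x16_bus_bits)     # 2048 MB on a 192-bit bus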
To be clear, that's our best guess. It's not something NVIDIA will confirm and it's not something we have been able to experimentally prove. But of the schemes NVIDIA could use, it's among the easiest to implement and likely the best performing; NVIDIA can ensure buffers stay in the lower 1.5GB, and then put less important data (e.g. irregularly used textures) in the last 512MB, where it would be the least impacted by the narrow bus.

Anandtech states it uses the 1.5GB 192-bit + 0.5GB 64-bit approach, and the site is generally correct on architectural details.
http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2
So what? With that description you can still interleave the memory space between the three 64-bit controllers like I described. That's the important thing if you want to derive the maximal usable bandwidth for blocked access. Stating it has 1 GB with 128-bit access and 1 GB with 64-bit access is misleading and also misrepresents how it is organised.
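To put rough numbers on the two regions under that organisation, here's a back-of-the-envelope sketch (Python). The 6008 MT/s effective GDDR5 data rate is my assumption (the retail 660 Ti's memory clock) rather than something stated in this thread:

# Rough peak-bandwidth figures for the two regions of a 2 GB / 192-bit card,
# assuming the 1.5 GB + 0.5 GB organisation discussed above and a
# 6008 MT/s effective GDDR5 data rate (assumption, not from this thread).
DATA_RATE_GTPS = 6.008  # effective transfers per pin, GT/s

def peak_bw_gbps(bus_bits, data_rate_gtps=DATA_RATE_GTPS):
    """Peak bandwidth in GB/s for a given bus width."""
    return bus_bits / 8 * data_rate_gtps

full_region = peak_bw_gbps(192)  # lower 1.5 GB, interleaved over all three 64-bit controllers
tail_region = peak_bw_gbps(64)   # last 0.5 GB, served by a single 64-bit controller

print(f"lower 1.5 GB: ~{full_region:.0f} GB/s")  # ~144 GB/s
print(f"last  0.5 GB: ~{tail_region:.0f} GB/s")  # ~48 GB/s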
I've read a couple of comments suggesting microstutter is particularly bad in games that pass the 1.5 GB mark.
Where did you read that?
Nvidia has been using this same memory configuration in some 1GB cards (GTX 550 Ti, GTX 460 v2), and I haven't seen anyone complaining about any unusual micro stuttering in games using more than 768MB.
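For reference, the arithmetic that makes a 1 GB / 192-bit card like the GTX 550 Ti uneven in the first place (a small Python sketch; the specific chip densities are an assumption on my part, but the imbalance itself is forced by the numbers):

# A 192-bit bus means six 32-bit GDDR5 channels. 1024 MB cannot be split into
# six equal power-of-two chips, so some channels must carry more memory than
# others (e.g. four 128 MB chips plus two 256 MB chips - the exact densities
# are an assumption, but some mix is unavoidable).
channels = 192 // 32           # 6 channels
print(1024 / channels)         # ~170.7 MB per channel -> not a valid chip size
print(4 * 128 + 2 * 256)       # 1024 MB with mixed-density chips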
http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-660-oem/specifications
nVidia announces GTX 660 (OEM), again GK104.
With all the delays etc, starts to look like GK106 has been canned.
Is there any rumor whatsoever as to what the GTS 650 might be, and when it might be released?
http://vga.it168.com/a2012/0713/1372/000001372093.shtml (translated) from a month ago. GK107 with GDDR5 and probably ≥1000 MHz core and ≥4800 MHz memory.
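If that pans out, the memory bandwidth math is simple enough (a sketch; GK107's 128-bit memory interface is the known part, the 4800 MT/s figure is just the rumour above):

# Peak bandwidth for a GK107 board with GDDR5 at the rumoured >=4800 MT/s,
# on the chip's 128-bit interface.
bus_bits = 128           # GK107 memory interface width
data_rate_gtps = 4.8     # rumoured effective GDDR5 data rate, GT/s
print(bus_bits / 8 * data_rate_gtps)  # 76.8 GB/s peak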
If there is going to be a GK106 (or a similar chip of around 200 mm^2 with 768-960 CCs), then it might show up so "late" that it'll last until Maxwell without any "GK116"-type refresh. Anyway, I can't say the absence of a GK106 is unexpected. There were some hints prior to or around the release of the first Kepler chips (can't seem to find them now) that GK106 and GK110 would come after the GK104 and GK107 parts.
That would leave so much room for AMD with Cape Verde and Pitcairn…
Not only compared to the competition, but also within NV's own Kepler family. GK107 is IMHO way too humble to cover what you could call the middle part of the 660 SKUs.
NV re-using GK104 even for 660 OEM SKUs could either mean that GK106 is facing unexpected delays, or that they simply have an unhealthy surplus of GK104 chips (binning yields).
You're relying more on ANET to deliver well-optimized and threaded graphics engine code than on nVidia to deliver a driver tailored for GW2.
-----------
My issue with my GTX 280 is that even on max graphics I'm only running at 50% GPU usage with 15-25 fps. Will this driver help?
-----------
Generally speaking, that side of the issue is more down to ANET than to nVidia.
That's blaming ArenaNet?
On the GPU usage percentage: I see a lot of people posting on many different game/hardware forums with low (50% or under) usage on 6xx series cards, and to a lesser extent 5xx, most with any 3xx.x driver, using MSI Afterburner etc. Is it just the way the app is reading the "cores", or are the drivers kinda borked? Do any 6xx or 5xx card users here have a similar issue?
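For what it's worth, a crude way to reason about numbers like that (a rough Python model, assuming the utilisation counter more or less reflects how busy the GPU is within each frame):

# Rough bottleneck estimate from observed fps and GPU utilisation.
# If the GPU is only busy for part of each frame, the rest of the frame time
# is spent waiting on the CPU/engine, so the GPU is not the limiter.
def gpu_limited_fps(observed_fps, gpu_utilisation):
    """Estimate the fps the GPU could deliver if it were the only limit."""
    frame_time = 1.0 / observed_fps            # seconds per frame
    gpu_busy = frame_time * gpu_utilisation    # time the GPU actually spends working
    return 1.0 / gpu_busy

# Example from the post above: ~20 fps at ~50% GPU usage.
print(gpu_limited_fps(20, 0.50))  # ~40 fps -> the GPU has headroom; the CPU/engine side is the bottleneck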