at $500 MSRP...
...and 3 years ago...
I also found memory overclocking on our sample to be quite limited, worse than other GTX 750 Ti cards using the same memory chips. So even though the 6-pin power connector suggests superior overclocking capability, it doesn't work out that way. I've tested other GTX 750 Ti cards that are better suited if you plan to overclock. Don't get me wrong, overclocking is not terrible, as it still provides a good 10% real-life performance increase, but I was expecting more.
As long as they use the same timings, no. I don't think there's really any difference between Kepler, Maxwell, or newer AMD cards there from a hardware point of view. The problem is always the same: the memory can't be reclocked unless the vblank periods are synchronized. For DVI monitors this typically means they need to be driven from the same clock source, and NVIDIA definitely did that earlier. It's also possible the card BIOS needs to play along, not just the driver; I'm not really sure there.
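To put a number on how small that reclocking window is, here's a quick back-of-the-envelope in Python. The timing values are the standard 1080p60 CEA raster (148.5 MHz pixel clock, 2200x1125 total); the point is that the card only gets a few hundred microseconds of vblank per frame, and with unsynchronized monitors those windows don't line up.

```python
# Back-of-the-envelope: how long the vblank window is for one monitor.
# Uses the standard 1920x1080@60Hz raster (148.5 MHz pixel clock,
# 2200x1125 total, 1080 active lines).

def vblank_window_us(pixel_clock_hz, h_total, v_total, v_active):
    """Time per frame spent in the blanking interval, in microseconds."""
    line_time_s = h_total / pixel_clock_hz
    return (v_total - v_active) * line_time_s * 1e6

win = vblank_window_us(148.5e6, 2200, 1125, 1080)
print(f"vblank window: {win:.0f} us per frame")  # ~667 us
```

With two monitors on independent clock sources, these ~667 µs windows drift relative to each other, so there's no guaranteed moment when both displays are blanking at once.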
They have to be perfectly in sync: same resolution, same timings, same polarity. It's similar to NVIDIA's requirements for their Surround modes.

Sorry for being somewhat off topic, but does anyone know what exactly is needed to have the monitors in sync so the 290 runs at its low clocks? It appears to be working for some people with different monitors, but not for my two (different) 60 Hz monitors. Are some combinations of outputs (I've tried DVI+HDMI and DVI+DVI so far) better than others, etc.?
Oh, you're right, I missed that. So forget what I said about there being no difference.

OK, thanks. So no chance when they are different resolutions...
But the NVIDIA cards still seem able to drop to the lowered memory clock when running 2 monitors with different resolutions and timings (but not with 3): http://www.techpowerup.com/reviews/ASUS/GTX_750_Ti_OC/23.html http://ht4u.net/reviews/2014/nvidia_geforce_gtx_750_ti_im_test/index17.php . Just wondering how they retrain the GDDR5 then.
Irrespective of whether or not the Sandra numbers are correct, it is to be expected that CPUs have a much lower latency than GPUs: latency is absolutely critical for a CPU to prevent stalls. Not so for a GPU, which usually has plenty of latency-hiding work. It'd be a waste of resources to optimize a GPU for a low-latency cache that isn't strictly needed.

Comparing the Tom's Hardware latency graph with Haswell numbers here, it looks like NVIDIA's 24KB L1 latency is about 3x Haswell's 6MB L3 latency, or comparable to Intel's 128MB off-chip L4 latency (for in-page random loads, which apparently means an access pattern that avoids TLB misses). And that's comparing just cycle counts; Haswell's cycle time is less than half that of the GM107.
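The cycle-time caveat matters a lot here, so here's the conversion spelled out. The cycle counts and clock speeds below are illustrative assumptions (roughly a 1.1 GHz GM107 boost clock and a 3.5 GHz Haswell; the ~34-cycle Haswell L3 figure is a commonly cited ballpark), not measurements from the article.

```python
# Convert cache-latency cycle counts into wall-clock time, since the
# two chips run at very different clocks. All numbers are illustrative
# assumptions, not measured values.

def cycles_to_ns(cycles, clock_ghz):
    """Latency in nanoseconds for a given cycle count and core clock."""
    return cycles / clock_ghz

gpu_l1_ns = cycles_to_ns(100, 1.1)  # assumed ~100-cycle GPU L1 hit @ 1.1 GHz
cpu_l3_ns = cycles_to_ns(34, 3.5)   # assumed ~34-cycle Haswell L3 hit @ 3.5 GHz
print(f"GPU L1: {gpu_l1_ns:.1f} ns, CPU L3: {cpu_l3_ns:.1f} ns")
```

So a ~3x gap in cycles turns into roughly an order of magnitude in nanoseconds once you account for the clock-speed difference.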
I'm wondering if Sandra might be launching a bunch of warps/wavefronts that all do memory access and end up getting scheduled in round robin fashion, so that what Sandra really measures is something more like the size of the scheduler's queue of warps/wavefronts rather than actual cache latency.
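That hypothesis can be sketched with a toy model: if the scheduler issues one warp per cycle round-robin and each warp runs a dependent pointer-chase, a warp can only reissue on its turn. Once the number of warps exceeds the true load latency, the per-access time a benchmark observes tracks the warp count, not the cache. This is a deliberately simplified model, not how any real scheduler is documented to work.

```python
# Toy model: N warps each run a dependent pointer-chase; the scheduler
# issues one warp per cycle in round-robin order, and a load completes
# after `true_latency` cycles. A warp reissues on its next turn after
# its load returns.

def observed_latency(true_latency, num_warps):
    """Steady-state cycles between consecutive accesses of one warp."""
    # The warp is gated either by the memory (true_latency) or by
    # waiting for its round-robin slot to come around (num_warps).
    return max(true_latency, num_warps)

print(observed_latency(100, 8))    # few warps: cache latency dominates
print(observed_latency(100, 512))  # many warps: the scheduler dominates
```

If Sandra launches enough concurrent warps, this kind of effect would make its "latency" number an artifact of occupancy rather than of the cache hierarchy.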
Apparently the 750 Ti is a great card for Bitcoin mining thanks to its performance-per-watt ratio. I guess the price of this card will skyrocket in the US.
Thankfully we don't have this problem in Europe (at least in Spain), where prices of AMD cards have remained stable.
Kind of disappointed we probably won't get a mid-to-high-range Maxwell card until next year.
My _guess_ would be that they simply have a large enough line buffer for scanout (for one monitor) so they don't need to wait for the vblank interval. Though I think you'd need something in the 100 kB range for that, which sounds a bit expensive (it would of course depend on resolution and refresh rate). Maybe they could exploit the L2 cache for this instead. I could be very wrong, though.
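As a sanity check on that 100 kB guess, here's the sizing arithmetic: the buffer has to keep scanout fed for however long the memory is unavailable during the retrain. The 150 µs retrain duration below is a pure assumption for illustration.

```python
# Rough line-buffer sizing for the scanout guess above: bytes needed
# to keep one monitor fed while memory is being retrained.
# The retrain duration is an assumption, not a known figure.

def buffer_bytes(pixel_clock_hz, bytes_per_pixel, retrain_us):
    """Bytes consumed by scanout over `retrain_us` microseconds."""
    return pixel_clock_hz * bytes_per_pixel * retrain_us * 1e-6

# 1080p60 (148.5 MHz pixel clock), 4 B/px, assuming memory is
# unavailable for 150 us during the retrain:
kib = buffer_bytes(148.5e6, 4, 150) / 1024
print(f"~{kib:.0f} KiB needed")  # lands in the ~100 kB ballpark
```

Which indeed comes out in the ballpark the comment estimates, and also shows why a shared L2 would be an attractive place to stash it.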
2) The graphs/benchmarks suck.
They have just benchmarked performance and divided it by the TDP. That gives very unreliable results.
You can see this immediately when they show overclocked cards getting better numbers. But overclocking always decreases power efficiency once you have to increase voltage.
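Here's that flaw in numbers. TDP is a fixed rating, so perf/TDP mechanically improves with any overclock, while actual dynamic power scales roughly with f * V^2 and so real perf/watt gets worse. All the wattages and scaling factors below are illustrative assumptions.

```python
# Why perf/TDP misleads: the TDP rating doesn't change when you
# overclock, but actual draw does (dynamic power ~ f * V^2).
# All numbers here are illustrative assumptions.

def dynamic_power(base_power_w, f_ratio, v_ratio):
    """Scaled dynamic power for a clock ratio and voltage ratio."""
    return base_power_w * f_ratio * v_ratio ** 2

tdp = 60.0          # rated TDP, watts (fixed, regardless of OC)
stock_power = 55.0  # assumed actual draw at stock
stock_perf = 100.0  # arbitrary performance units

# A +10% overclock that needs +5% voltage, perf scaling with clock:
oc_perf = stock_perf * 1.10
oc_power = dynamic_power(stock_power, 1.10, 1.05)

print(f"perf/TDP:  stock {stock_perf / tdp:.2f}, OC {oc_perf / tdp:.2f}")
print(f"perf/watt: stock {stock_perf / stock_power:.2f}, "
      f"OC {oc_perf / oc_power:.2f}")
# perf/TDP goes UP with the overclock while real perf/watt goes DOWN,
# which is exactly the inversion the benchmark's method produces.
```

So a chart built on perf/TDP will rank an overclocked card as more "efficient" at the same time its measured efficiency drops.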