NVIDIA cards can be overclocked too, a GTX 685 could also be made with higher clocks!
lol
Lower fidelity/IQ would be somewhat OK for the side monitors, except that people often turn their heads to briefly focus on something their peripheral vision caught, rather than turning their entire in-game viewpoint toward it. But I think it could be a net benefit in certain games (racing games, for instance).
Lower framerates would just be bad. Across the board bad. With absolutely zero redeeming features, IMO. Even if it's your peripheral vision, having it update once for every 2 frames of the main view is going to be distracting.
Now move it to any type of strategy game (Civ 5, for instance) and you'll immediately notice the effect on your mouse: it'll get less responsive the moment you move from the primary monitor to the side monitors.
And then how are they going to deal with 3x2 setups? Or 5x1 setups?
If this truly is something Nvidia is putting in, hopefully they are smart about it and make it something you opt in to, rather than the default.
In other words, the only good thing that could come of lower framerates on side monitors is to boost benchmark scores. It would be absolutely useless in actual gameplay.
Regards,
SB
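Just to make concrete what is being objected to here, a toy sketch of the rumoured half-rate side-monitor behaviour. Everything in it (SurroundRig, presentCenter/presentSides, the whole frame loop) is made up for illustration; it is not based on any actual driver or NVAPI interface.

// Toy sketch of the rumoured behaviour: the centre monitor is presented
// every frame, the side monitors only every Nth frame.
#include <cstdint>

struct SurroundRig {
    int sideDivisor = 2;          // side monitors update once per 2 frames
    uint64_t frameIndex = 0;

    void renderFrame() {
        renderCenterView();
        presentCenter();          // full rate on the screen you focus on

        // Peripheral views only refresh on every 2nd frame -- exactly the
        // half-rate update that makes motion in your side vision stutter
        // relative to the centre screen.
        if (frameIndex % sideDivisor == 0) {
            renderSideViews();
            presentSides();
        }
        ++frameIndex;
    }

    // Stubs standing in for the real render/present calls.
    void renderCenterView() {}
    void renderSideViews()  {}
    void presentCenter()    {}
    void presentSides()     {}
};

int main() {
    SurroundRig rig;
    for (int i = 0; i < 4; ++i)
        rig.renderFrame();        // frames 0 and 2 refresh the side monitors
}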
Lower framerates would just be bad. Across the board bad. With absolutely zero redeeming features, IMO. Even if it's your peripheral vision, having it update once for every 2 frames of the main view is going to be distracting.

This will be like the jello effect you get when shooting video on DSLRs - only worse, because your peripheral vision is more sensitive to frame rate than your focus.
To get maximum utilization, the processing of the pixels in a quad has to be able to get out of sync. I really doubt that happens with dynamic scheduling right now.
When trying it statically, it basically has the disadvantages of a combined SIMD8-VLIW8 approach (you described it more as a dual-issue VLIW4, but it is not going to work that way) and would try to conceal this by processing a quad rather than a pixel in each (VLIW) vector lane. For some problems it might work okay, but in general it doesn't really fit. Just imagine the hassle when a branch diverges within a quad. How do you compile for that?
And with dynamic scheduling (without VLIW) you would actually need an 8-way scheduler for it. You would need to split each 32-element warp into 4 quarter warps (each quarter warp holding one pixel from each of the 8 quads in the warp) and schedule the quarter warps individually. And you don't get any performance improvement compared to my suggestion above, but you need far more complex scheduling hardware.
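To make the quarter-warp idea a bit more concrete, here is a toy mapping (my own numbering convention, nothing official): a 32-lane warp holding 8 quads is regrouped into 4 quarter warps of 8 lanes, each quarter warp taking the same pixel position from every quad, and those 4 groups would then have to be scheduled independently.

// Toy illustration of splitting a 32-lane warp (8 quads of 2x2 pixels)
// into 4 quarter warps of 8 lanes, one pixel position per quad.
#include <array>
#include <cstdio>

constexpr int kWarpSize     = 32;
constexpr int kQuadSize     = 4;                        // 2x2 pixels
constexpr int kQuadsPerWarp = kWarpSize / kQuadSize;    // 8

int main() {
    // quarterWarp[p][q] = original warp lane of pixel position p in quad q
    std::array<std::array<int, kQuadsPerWarp>, kQuadSize> quarterWarp{};

    for (int lane = 0; lane < kWarpSize; ++lane) {
        int quad        = lane / kQuadSize;     // which 2x2 quad
        int pixelInQuad = lane % kQuadSize;     // position inside the quad
        quarterWarp[pixelInQuad][quad] = lane;
    }

    // Each row is one quarter warp: 8 lanes, one from each quad.  A scheduler
    // issuing these rows independently needs 4x the bookkeeping of a plain
    // 32-wide warp -- the "far more complex scheduling hardware" point above.
    for (int p = 0; p < kQuadSize; ++p) {
        std::printf("quarter warp %d: ", p);
        for (int lane : quarterWarp[p]) std::printf("%2d ", lane);
        std::printf("\n");
    }
}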
FXAA cures more aliasing than MSAA in BF3; MSAA misses most edges altogether there. FXAA blurs the image, though, of course.
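For anyone wondering why that is: FXAA works on the final image and flags any pixel whose local luma contrast is high, so shader and alpha-test aliasing get smoothed too, at the price of also softening genuine high-contrast detail. A stripped-down sketch of just that detection step follows (the thresholds are illustrative, not the tuned values from the real shader).

// Stripped-down version of FXAA's edge detection: compare the luma of a
// pixel against its 4 neighbours; large local contrast marks an "edge"
// regardless of whether it came from geometry, a shader or an alpha test.
// The actual blending along the edge happens afterwards in FXAA proper.
#include <algorithm>
#include <cstdio>

float luma(float r, float g, float b) {
    return 0.299f * r + 0.587f * g + 0.114f * b;   // perceptual-ish weights
}

bool isEdge(float lumaCenter, float lumaN, float lumaS, float lumaE, float lumaW) {
    float lumaMin  = std::min({lumaCenter, lumaN, lumaS, lumaE, lumaW});
    float lumaMax  = std::max({lumaCenter, lumaN, lumaS, lumaE, lumaW});
    float contrast = lumaMax - lumaMin;
    // Skip low-contrast areas; everything else gets treated.
    return contrast > std::max(0.05f, lumaMax * 0.125f);
}

int main() {
    // A bright pixel next to dark neighbours trips the contrast test.
    std::printf("edge: %d\n", isEdge(luma(1, 1, 1), 0.1f, 0.1f, 0.1f, 0.1f));
}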
Yes, Apple is at war against scripting languages, except the slowest of all, JavaScript. They don't want Flash because you can make "apps" with it (with access to the camera and microphone).
This is my "extrapolation" of the Fermi multiprocessor into a Kepler version. The scheduler is omitted.
Bashing is welcome!
Even if it's your peripheral vision, having it update once for every 2 frames of the main view is going to be distracting.
…
And then how are they going to deal with 3x2 setups? Or 5x1 setups?

Well, for the 5x1, the outermost monitors could update once per 4 frames….
Didn't this part of the discussion originate in conjunction with something called adaptive VSync? How could someone try to boost benchmark scores that are obtained VSynced?
You should be laughing at YOUR argument. No one takes overclocking as the basis of product competitiveness when it works both ways; that may hold true when one product is being pushed so hard to its limits that you can't extract any more juice out of it, but when both products are using the same technology, your argument is not even half valid.
The same applies to driver optimizations: when both companies are using new architectures, you can't expect one to pull ahead of the other through driver enhancements, since those are going to be applied both ways too.
That is an 8-way scheduler (or a dual 4-way one). My last paragraph with the "quarter warps" was describing exactly this. It would basically result in scheduling with a granularity of just 8 work items.

I'm pretty sure I don't follow what you're saying. I was thinking of the scheduler dynamically constructing what is essentially a VLIW-8 instruction by considering up to 2 instructions from a single warp per cycle.
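A toy model of that dynamic dual-issue idea (the Instr type and the dependence check are made up purely for illustration; real scheduling hardware obviously tracks far more state than this): each cycle, look at the next two instructions of a warp and pair them only if the second doesn't depend on the first.

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Instr {
    uint8_t dst;            // destination register
    uint8_t src0, src1;     // source registers
};

// True if b reads a's result (or writes the same register) -- the simplest
// possible hazard check.
bool dependent(const Instr& a, const Instr& b) {
    return b.src0 == a.dst || b.src1 == a.dst || b.dst == a.dst;
}

// Returns how many instructions (1 or 2) get issued from 'pc' this cycle.
// Two independent instructions issued together over a pair of 4-wide SIMDs
// is what ends up looking like a dynamically-built VLIW-8 bundle.
int issueCycle(const std::vector<Instr>& stream, std::size_t pc) {
    if (pc + 1 < stream.size() && !dependent(stream[pc], stream[pc + 1]))
        return 2;           // dual issue: an effective 8-wide bundle
    return 1;               // fall back to single issue
}

int main() {
    std::vector<Instr> stream = {
        {2, 0, 1},          // r2 = r0 op r1
        {5, 3, 4},          // r5 = r3 op r4  (independent -> can be paired)
        {6, 5, 2},          // r6 = r5 op r2  (depends on both of the above)
    };
    std::printf("cycle 0 issues %d instruction(s)\n", issueCycle(stream, 0));
    std::printf("next slot issues %d instruction(s)\n", issueCycle(stream, 2));
}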
This is my "extrapolation" of the Fermi multiprocessor into a Kepler version. The scheduler is omitted. Bashing is welcome!

In that picture an SMX would have 8 ports, not just 3.
What would be the point in improving main-screen fps under VSync anyway? Unless it was below 60 fps of course.
What I wonder is how it could even be measured via Fraps or whatever - is it taken as an average over the whole setup or just the middle screen? You can see how the latter would be a serious advantage to Nvidia in benchmarking (if they have control over the benchmark).
I cannot say, because I don't know what the cited source meant by it anyway. I just wanted to make the point that, in conjunction with VSync, it's highly unlikely that this is some kind of multi-monitor benchmark fraud.
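For reference, the publicly described behaviour of adaptive VSync is simply: keep VSync on while the renderer can hold the refresh rate, drop it (accepting tearing) when it can't, so a VSynced result is capped at the refresh rate anyway. A toy restatement of that logic (setVSync and the frame-time source are placeholders, not a real API):

#include <cstdio>

const double kRefreshHz   = 60.0;
const double kFrameBudget = 1000.0 / kRefreshHz;   // 16.67 ms at 60 Hz

void adaptiveVSyncStep(double lastFrameTimeMs, void (*setVSync)(bool)) {
    // At or under budget: sync to the display, which caps fps at 60 --
    // which is why a VSynced score can't be inflated above the refresh rate.
    // Over budget: drop VSync rather than quantising down to 30/20 fps.
    setVSync(lastFrameTimeMs <= kFrameBudget);
}

void setVSyncStub(bool on) { std::printf("VSync %s\n", on ? "on" : "off"); }

int main() {
    adaptiveVSyncStep(14.0, setVSyncStub);  // holding 60+ fps -> sync
    adaptiveVSyncStep(22.0, setVSyncStub);  // dropped below 60 -> tear instead
}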
I wonder why they left the LDS/L1 combo size unchanged.