I wouldn't want to convert my GTX 680 (if I owned one) to an iPad 2.
What do you think of this?
Sandra GPGPU bench against Tahiti AND Pitcairn:
http://www.theinquirer.net/inquirer...hed-amds-mid-range-radeon-hd-7870-gpu-compute
Wonder if the clock is really 700 MHz or 1000 MHz here...
Apparently, this test is compute-throughput limited. The scaling relative to GF110 follows the doubling in peak FLOPs.
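For reference, a back-of-the-envelope sketch of what a purely compute-throughput-limited test would predict, assuming the usual 2 FLOPs per ALU per clock (FMA) and the commonly quoted reference clocks (the GTX 680 figure assumes the rumored ~1006 MHz base clock):

```python
# Rough peak single-precision throughput, assuming 2 FLOPs/ALU/clock (FMA)
# and commonly quoted reference clocks. Clocks in MHz, result in GFLOPS.
def peak_sp_gflops(alus, clock_mhz, flops_per_alu_per_clock=2):
    return alus * clock_mhz * flops_per_alu_per_clock / 1000.0

cards = {
    "GTX 580 (GF110)":    peak_sp_gflops(512, 1544),   # hot-clocked ALUs
    "GTX 680 (GK104)":    peak_sp_gflops(1536, 1006),  # no hot clock
    "HD 7870 (Pitcairn)": peak_sp_gflops(1280, 1000),
    "HD 7970 (Tahiti)":   peak_sp_gflops(2048, 925),
}

base = cards["GTX 580 (GF110)"]
for name, gflops in cards.items():
    print(f"{name}: {gflops:6.0f} GFLOPS  ({gflops / base:.2f}x GF110)")
# GK104 lands at roughly 2x GF110 -- the "FLOPs doubling" mentioned above.
```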
680 seems to have more CUDA cores as Mediawhateverespresso uses CUDA, right?
Yes, MediaEspresso 6.5 uses CUDA for Nvidia and APP for AMD cards.
It's definitely fast.
Might that be due to an encode engine...?
I'm not buying the "new architecture" bit just yet. So far it looks like an optimized GF114 Fermi on clock roids.
Neither do I. I bet the scheduling nonsense of old has been deleted.
This is a non-argument. Same applies to AMD's stuff.
@Jawed, maybe 28nm wide+slow simply has better overall perf/watt/mm characteristics than narrow+fast on previous nodes.
It's worth noting that the entire chip is now running at much higher clocks. If the rumors pan out, the regfile, scheduler, TMUs, ROPs, geometry units etc. will be running at speeds approaching G80's hot clock.
Neither do I.
Nvidia has always followed a method similar to Intel's tick-tock (NV20 new architecture, NV25 speed upgrade).
Fermi was a new architecture, so Kepler is the upgrade part and Maxwell will be something else.
So all we had to do to calculate the performance of the GTX 680 was take the numbers from a GTX 580 and multiply by three? Well, don't we look foolish now. We could have saved ourselves over 3200 posts of discussion if we had only known.
Err... multiply by 3? Having 3x the ALUs doesn't increase performance by 3x, especially when the Fermi ALUs are hot-clocked and these aren't, not to mention other possible differences.
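A quick sketch of that point, assuming the commonly quoted clocks (1544 MHz hot clock for the GTX 580's ALUs vs. the GTX 680's ~1006 MHz base clock): tripling the ALU count while cutting the ALU clock by roughly a third only doubles the theoretical throughput.

```python
# Why 3x the ALUs does not mean 3x the performance: the hot clock is gone.
gf110_alus, gf110_alu_clock = 512, 1544    # GTX 580, hot-clocked ALUs (MHz)
gk104_alus, gk104_alu_clock = 1536, 1006   # GTX 680, ALUs at base clock (MHz)

alu_ratio   = gk104_alus / gf110_alus            # 3.0x
clock_ratio = gk104_alu_clock / gf110_alu_clock  # ~0.65x

print(f"ALU ratio:       {alu_ratio:.2f}x")
print(f"ALU clock ratio: {clock_ratio:.2f}x")
print(f"Net peak ratio:  {alu_ratio * clock_ratio:.2f}x")  # ~1.95x, not 3x
# And that is before any differences in scheduling, bandwidth, caches, etc.
```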