I'll be straightforward: it doesn't need that much power, and as for performance, the drivers are raw and it's performing well...
You realize if you have real knowledge/data you're actually not permitted on this thread, right?
> I'll be straightforward: it doesn't need that much power, and as for performance, the drivers are raw and it's performing well...
I don't quite get how some people really expect that kind of X2 card (or an X2 card at all, for that matter), when it's a known fact that the castrated Fermi GF100 (1/8th of the cores disabled), with clocks expected to be lower than the GeForce variant's, is already at a 225W TDP.

I call BS on the 300W TDP for a Fermi-based single-chip GeForce. That's what we are expecting for the X2 card.
> If it were 30-35% faster, do you really think they would still not show pseudo-official numbers, say, by showing the fps counter in Heaven?

Nope. I'm still going with "it's either 30-35% faster than the HD 5870 or it's fail".
> They'd have to clock and power each transistor the same, which is absurd. Until NV make a statement about power, nothing should be read into power where it concerns the cooling solution or connector choice.

Fermi being ~50% bigger than Cypress should roughly translate to a ~50% higher TDP: 188W x 1.5 = 282W ... as a rough baseline, and assuming they need to put the whole chip to work. That's physics 101.
That's pure comedy.

Fermi 'straddling' GPU and GPGPU means a large chunk of that real estate is non-optimized for pure GPU tasks.
It simply means Nvidia doesn't have a magic wand to wave away the laws of physics.
Fermi being ~50% bigger than Cypress should roughly translate to a ~50% higher TDP: 188W x 1.5 = 282W ... as a rough baseline, and assuming they need to put the whole chip to work. That's physics 101.
Fermi 'straddling' GPU and GPGPU means a large chunk of that real estate is non-optimized for pure GPU tasks.
Fermi competing with Cypress on cost/performance means they have to wring the last bit of GPU-specific performance they can from the chip: maximum realizable clock speeds, drivers optimized to extract as much GPU performance from the GPGPU real estate as possible. Do they even have the option of doing otherwise? They absolutely HAVE to exceed the 5870 by some appreciable margin.
Add in that it's a new and very complicated chip design on a new process, plus the time pressure of having to go with the earliest usable (non-optimized) silicon spin possible, and why wouldn't Fermi be pushing the 300 watt envelope? Or even be at 300 watts without the clock speeds maxed out.
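The back-of-envelope math in the post above can be sketched as a quick calculation. To be clear, this only illustrates the poster's linear area-to-TDP assumption (the 188W Cypress figure and the ~50% size difference come from the thread itself); it is not a physical law, and the names in the snippet are made up for the sketch:

```python
# Sketch of the "physics 101" estimate argued above: assume power
# scales roughly linearly with die area (the poster's assumption,
# not a verified rule).

CYPRESS_TDP_W = 188      # HD 5870 board power figure used in the thread
FERMI_AREA_RATIO = 1.5   # "Fermi being ~50% bigger than Cypress"

def scaled_tdp(base_tdp_w: float, area_ratio: float) -> float:
    """Linear scaling: a chip 50% bigger draws ~50% more power."""
    return base_tdp_w * area_ratio

estimate = scaled_tdp(CYPRESS_TDP_W, FERMI_AREA_RATIO)
print(f"Linear-scaling Fermi estimate: {estimate:.0f}W")  # 282W
```

Clocks, voltage, and how much of the die is actually switching matter at least as much as raw area, so this number is an upper-bound guess rather than a prediction.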
> Fermi being ~50% bigger than Cypress should roughly translate to a ~50% higher TDP: 188W x 1.5 = 282W ... as a rough baseline, and assuming they need to put the whole chip to work. That's physics 101.
You realize if you have real knowledge/data you're actually not permitted on this thread, right?
If it were 30-35% faster, do you really think they would still not show pseudo-official numbers, say, by showing the fps counter in Heaven?
There are only 2 reasons not to show it, and neither would really be welcome...
1- VERY bad PR staff.
2- It is "slow", or at least doesn't give a substantial performance advantage.
Even with a moderately low framerate (5870 level), they could still argue the drivers aren't ready and that's why they aren't shipping today, so the first reason is pretty much proven anyway.
They'd have to clock and power each transistor the same, which is absurd. Until NV make a statement about power, nothing should be read into power where it concerns the cooling solution or connector choice.
Now, I wonder how a GeForce could consume 300W, when we know what a 6GB Tesla will need.
To show potential customers they will offer a faster GPU in the near future?

Plus, if drivers aren't up to speed yet, why should they show real numbers now, when performance will be even better in March?
GT200b has 45% more transistors and is 85% bigger than RV770, but has only a 28% higher TDP.
Physics law is a bitch...
> Now, I wonder how a GeForce could consume 300W, when we know what a 6GB Tesla will need.
http://www.madshrimps.be/vbulletin/f22/amd-radeon-hd-5870-hd-5850-performance-chart-66465/
So it uses 28% more power to achieve the same performance. And has no GPGPU pretensions.
What would its power use be if it had 45% more performance than the 4870? How about if it added 50% more transistors that had nothing to do with game performance?
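The GT200b/RV770 numbers quoted above make a handy sanity check on the linear-scaling idea. Here is a small sketch using the thread's own ratios (+45% transistors, +85% area, +28% TDP; treat these as the posters' figures, not verified measurements):

```python
# Ratios quoted in the thread for GT200b vs RV770.
transistor_ratio = 1.45  # "45% more transistors"
area_ratio = 1.85        # "85% bigger"
tdp_ratio = 1.28         # "only a 28% higher TDP"

# If TDP scaled linearly with die area, it would have gone up 85%,
# not 28%; the ratio below is how far the naive rule overshoots
# for this pair of chips.
overshoot = area_ratio / tdp_ratio
print(f"Naive area scaling overshoots the real TDP ratio by {overshoot:.2f}x")
```

In other words, by the thread's own example, die area alone is a poor predictor of TDP, which is the point being argued against the "physics 101" estimate.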
I'll ask you again: was PhysX being used? Nothing I can discern in the video or audio indicates as such. First you state it as fact, then you say presumably, then you say it's a fact again.
> The guy presenting the demo explicitly says that it is.

He does? I couldn't make that out. I hear him say "physics" a number of times, but never PhysX.
Nvidia confirmed to PC Games Hardware that there will be a special PhysX demo, called Supersonic Sled, on display at CES. The rocket that is shown isn't just animated in a physically correct way, but also produces the appropriate smoke; the destructible obstacles are animated correctly, too. The demo supports DirectX 11 and 3D Vision.