NVIDIA Fermi: Architecture discussion

I'll be straightforward: it doesn't need that much power :p and as for its performance, the drivers are raw and it's already performing well...

You realize if you have real knowledge/data you're actually not permitted on this thread, right? :)
 
I call BS on the 300 W TDP for a single-chip Fermi-based GeForce. That's what we're expecting for the X2 card.
I don't quite get how some people really expect that kind of X2 card (or an X2 card at all, for that matter), when it's a known fact that the castrated Fermi GF100 (1/8 of the cores disabled), with clocks expected to be lower than the GeForce variant's, is already at a 225 W TDP.
 
Nope. I'm still going with "it's either 30-35% faster than the HD 5870 or it's a fail".
If it were 30-35% faster, do you really think they would still not show pseudo-official numbers, say by showing the FPS counter in Heaven?

There are only two reasons not to show it, and neither would really be welcome...

1- VERY bad PR staff.
2- It is "slow", or at least doesn't give a substantial performance advantage.

Even with a moderately low framerate (5870-level), they could still argue the drivers aren't ready and that's why they aren't shipping today, so the first reason is all but proven anyway.
 
It simply means Nvidia doesn't have a magic wand to wave away the laws of physics.

Fermi being ~50% bigger than Cypress should roughly translate to a 50% higher TDP: 188 W × 150% = 282 W ... as a rough baseline, and assuming they need to put all of the chip to work. That's physics 101.

Fermi 'straddling' GPU and GPGPU means a large chunk of that real estate is non-optimized for pure GPU tasks.

Fermi competing with Cypress on cost/performance means they have to wring the last bit of GPU-specific performance they can from the chip ... maximum realizable clock speeds, drivers optimized to extract as much GPU performance from the GPGPU real estate as possible ... do they even have the option of doing otherwise? They absolutely HAVE to exceed the 5870's specs by some appreciable margin.

Add in that it's a new and very complicated chip design on a new process, plus the time element of having to go with the earliest usable (non-optimized) silicon spin possible, and why wouldn't Fermi be pushing the 300 W envelope? Or even be at 300 W without the clock speeds being where they need to be?
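For reference, the back-of-envelope model behind that 188 W × 150% = 282 W figure is just the standard CMOS dynamic power approximation, P ≈ α·C·V²·f, with switching capacitance assumed proportional to die area and everything else held constant. A rough sketch of that assumption (the 188 W and 50% figures are from the post above; the function and its name are purely illustrative):

# Naive CMOS dynamic-power scaling: P ~ activity * capacitance * V^2 * f.
# Treating switching capacitance as proportional to die area is the
# assumption behind the "50% bigger die => 50% more power" estimate.
def dynamic_power(base_power_w, area_scale=1.0, voltage_scale=1.0, freq_scale=1.0):
    return base_power_w * area_scale * voltage_scale ** 2 * freq_scale

cypress_tdp = 188.0  # W, the Cypress (HD 5870) figure used in the post
print(f"Area-only estimate: {dynamic_power(cypress_tdp, area_scale=1.5):.0f} W")  # -> 282 W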
 
Fermi being ~50% bigger than Cypress should roughly translate to a 50% higher TDP: 188 W × 150% = 282 W ... as a rough baseline, and assuming they need to put all of the chip to work. That's physics 101.
:LOL:! They'd have to clock and power each transistor the same, which is absurd. Until NV makes a statement about power, nothing should be read into the cooling solution or connector choice where power is concerned.
Fermi 'straddling' GPU and GPGPU means a large chunk of that real estate is non-optimized for pure GPU tasks.
:LOL: That's pure comedy.
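The objection, spelled out: the area-only estimate holds voltage and clock fixed, which is exactly what a designer would tune. Using the same toy model as above (the scaling numbers here are purely illustrative, not actual Fermi specs), even modest clock and voltage reductions move the result a long way from 282 W:

# Same toy model: P ~ area * V^2 * f. Illustrative numbers only:
# a 50% larger die clocked 15% lower, at 10% lower voltage.
def dynamic_power(base_power_w, area_scale=1.0, voltage_scale=1.0, freq_scale=1.0):
    return base_power_w * area_scale * voltage_scale ** 2 * freq_scale

p = dynamic_power(188.0, area_scale=1.5, voltage_scale=0.90, freq_scale=0.85)
print(f"{p:.0f} W")  # -> ~194 W, nowhere near the 282 W area-only figure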
 
Fermi being ~50% bigger than Cypress should roughly translate to a 50% higher TDP: 188 W × 150% = 282 W ... as a rough baseline, and assuming they need to put all of the chip to work. That's physics 101.


Are you forgetting frequencies? :LOL: That must be in Physics 201. Ah, Rys beat me to it.
 
Fermi being ~50% bigger than Cypress should roughly translate to a 50% higher TDP: 188 W × 150% = 282 W ... as a rough baseline, and assuming they need to put all of the chip to work. That's physics 101.

GT200b has 45% more transistors and is 85% bigger than RV770, but has only a 28% higher TDP.
The laws of physics are a bitch...
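Taking the post's own ratios at face value (not independently verified here), the quick arithmetic makes the point: power per mm² actually went down, so die area alone doesn't set TDP.

# Ratios as quoted in the post (GT200b vs RV770):
transistor_ratio = 1.45  # 45% more transistors
area_ratio = 1.85        # 85% bigger die
tdp_ratio = 1.28         # 28% higher TDP

print(f"Power per mm^2 ratio: {tdp_ratio / area_ratio:.2f}")              # ~0.69
print(f"Power per transistor ratio: {tdp_ratio / transistor_ratio:.2f}")  # ~0.88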
 
If it were 30-35% faster, do you really think they would still not show pseudo-official numbers, say by showing the FPS counter in Heaven?

There are only two reasons not to show it, and neither would really be welcome...

1- VERY bad PR staff.
2- It is "slow", or at least doesn't give a substantial performance advantage.

Even with a moderately low framerate (5870-level), they could still argue the drivers aren't ready and that's why they aren't shipping today, so the first reason is all but proven anyway.

Not really. They'll have their Editor's Day right after CES and your official numbers will be there, even if all of them are under NDA, but I'm sure "someone" will eventually show them :)

As for why you say the first reason is proven, it's the same thing as before all over again: "Damned if you do and damned if you don't". If they show real numbers, they're accused of doing PR stunts without real availability. If they don't, it's because they're either slow or just bad at PR...

Plus, if drivers aren't up to speed yet, why should they show real numbers now, when performance will be even better in March?
 
:LOL:! They'd have to clock and power each transistor the same, which is absurd. Until NV makes a statement about power, nothing should be read into the cooling solution or connector choice where power is concerned.

Unless Nvidia has made some unknown quantum leap in performance per mm² in their design, the SIZE of the chip ROUGHLY equates to the power it uses when operating AT THE SAME PERFORMANCE LEVEL.

Or are you implying Nvidia can design a chip the same size and performance as Cypress, on the same process node, with 2/3 the TDP?
 
I dunno... everything I've seen about Fermi so far just hasn't been very convincing for a GPU that's had that kind of hype... R600 kind of hype, that is. If I was NV, I would be praising it like there's no tomorrow, along with benchmarks that show the world that they truly do have the world's fastest GPU getting ready to launch in a few months. Why wait, when they're losing customers every day to the competition? I'm sure reassuring us with real numbers would help more than harm... that's if Fermi IS really all it's cracked up to be... unless... :oops:
 
Plus, if drivers aren't up to speed yet, why should they show real numbers now, when performance will be even better in March?
To show potential customers they will offer a faster GPU in the near future?

Oh, sorry, I assumed it really was 30-35% faster, but since they don't show it, even via a subtle leak, it's probably not the case, so they can't show it, as it would be a PR stunt... :devilish:

Going very, very twisted, there's one last reason not to show performance now: they could want to improve their image. But that doesn't make much sense anyway, since they still have their bad practices going on.
 
GT200b has 45% more transistors and is 85% bigger than RV770, but has only a 28% higher TDP.
The laws of physics are a bitch...

http://www.madshrimps.be/vbulletin/f22/amd-radeon-hd-5870-hd-5850-performance-chart-66465/

So it uses 28% more power to achieve the same performance. And has no GPGPU pretensions.

What would its power use be if it had 45% more performance than the 4870? How about if it added 50% more transistors that had nothing to do with game performance?

Logic is a bitch.
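To put rough numbers on that rhetorical question (purely illustrative arithmetic, assuming performance scaled linearly with clock at fixed voltage, which real chips don't quite manage):

# Illustrative extension of the quoted ratios (not measured data):
# if +28% power buys only performance parity, and performance scaled
# linearly with clock at fixed voltage, a 45% performance uplift would
# naively need:
print(f"{1.28 * 1.45:.2f}x the RV770 power")  # ~1.86x
# ...and more still if voltage had to rise along with frequency.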
 
I'll ask you again: was PhysX being used? Nothing I can discern in the video or audio indicates as much. First you state it as fact, then you say "presumably", then you state it as fact again :???:

Are you asking for my opinion on whether PhysX was being used? The guy presenting the demo explicitly says that it is and I see no reason to believe he's lying. I'm not sure what further proof you want, maybe they should send you the code to review? :p
 
He does? I couldn't make that out. I hear him say "physics" a number of times, but never PhysX.

Oh come on :LOL: Yes I'm sure there's an infinitesimal chance somehow that an Nvidia GPU demo that features heavy physics is not using PhysX.

Also:

Nvidia confirmed to PC Games Hardware that there will be a special PhysX demo, called Supersonic Sled, on display at CES. The rocket that is shown isn't just animated in a physically correct way, but also produces the appropriate smoke; the destructible obstacles are animated correctly, too. The demo supports DirectX 11 and 3D Vision.
 