NVIDIA Fermi: Architecture discussion

Oh come on :LOL: Yes I'm sure there's an infinitesimal chance somehow that an Nvidia GPU demo that features heavy physics is not using PhysX.

Also:
Thanks. I was wondering if the demo was solely D3D11, which is why I was intrigued to know for sure, one way or the other. The fact the entire video passed without a single explicit mention of PhysX was a puzzler. No PhysX logo, etc.

Jawed
 
To show potential customers they will offer a faster GPU in the near future?

Oh, sorry, I assumed it really was 30-35% faster, but since they don't show it, even via a subtle leak, that's probably not the case, so they can't show it as it would be a PR stunt... :devilish:

Taking a very, very twisted view, there's one last reason not to show performance now: they could want to polish their image. But that doesn't make much sense anyway, since their bad practices are still going on.

No company shows performance numbers when the products aren't 100% yet, unless they know it's going to fail against the competition and might as well release it and see how it goes (like R600).

Anyway, I'm sure that performance numbers are imminent.
 
So it uses 28% more power to achieve the same performance. And has no GPGPU pretensions.
Sometimes more complex scheduling can be a design choice for higher utilization, which is not just a GPGPU consideration.
The DP units and other widgets indicate it had at least some silicon set aside for GPGPU.

What would its power use be if it had 45% more performance than the 4870? How about if it added 50% more transistors that had nothing to do with game performance?
Performance-wise, you would need to isolate factors that can contribute to performance in a wide variety of scenarios. In ALU-limited situations where RV770 could put its math advantage to work, GT200 would have needed a lot more power to get a 45% performance lead.

You would need to cite the source of your numbers. I am reluctant to say that Fermi's additional 50% in transistor count is devoted solely to GPGPU. I would say a lot of its enhancements serve multiple purposes, and it has devoted quite a bit of room to texturing and ROPs in keeping with its higher transistor count.
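For what it's worth, the perf-per-watt arithmetic being argued over here can be sketched out. All numbers below are illustrative placeholders, not measured figures for GT200, RV770, or Fermi:

```python
# Hypothetical perf/watt comparison sketch; every figure here is a made-up
# placeholder, NOT a measured number for any real GPU.

def perf_per_watt(perf: float, power_w: float) -> float:
    """Performance per watt: higher is better."""
    return perf / power_w

# Normalize the smaller chip to 1.0 performance at 100 W (arbitrary baseline).
rv770_perf, rv770_power = 1.0, 100.0

# A chip that matches that performance while drawing 28% more power.
gt200_perf, gt200_power = 1.0, rv770_power * 1.28

ratio = perf_per_watt(rv770_perf, rv770_power) / perf_per_watt(gt200_perf, gt200_power)
print(f"smaller chip's perf/W advantage: {ratio:.2f}x")  # 1.28x at equal performance

# If the larger chip were instead 45% faster at the same 28% power premium,
# the perf/W picture flips in its favor:
lead = perf_per_watt(1.45, gt200_power) / perf_per_watt(rv770_perf, rv770_power)
print(f"larger chip's perf/W with a 45% lead: {lead:.2f}x")
```

The point of the sketch is only that "28% more power" means opposite things for efficiency depending on whether performance is equal or 45% higher, which is exactly what the two posters are disputing.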
 
http://www.madshrimps.be/vbulletin/f22/amd-radeon-hd-5870-hd-5850-performance-chart-66465/

So it uses 28% more power to achieve the same performance. And has no GPGPU pretensions.

What would its power use be if it had 45% more performance than the 4870? How about if it added 50% more transistors that had nothing to do with game performance?

Logic is a bitch.
But we're comparing Fermi with Cypress here, not RV770. So instead imagine RV770 had been 30% slower, and 334 mm^2 instead of 256 mm^2.
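Putting rough numbers on that hypothetical (a sketch using only the figures from the post above, performance normalized so the real RV770 = 1.0):

```python
# Sketch of the hypothetical: take RV770's actual die area, then the imagined
# "30% slower, 334 mm^2" variant, and compare performance per mm^2.
# Performance is normalized so the real RV770 = 1.0; purely illustrative.

rv770_area_mm2 = 256.0
rv770_perf = 1.0

hypo_area_mm2 = 334.0
hypo_perf = rv770_perf * 0.70  # "30% slower"

real_density = rv770_perf / rv770_area_mm2
hypo_density = hypo_perf / hypo_area_mm2

print(f"real RV770:   {real_density:.5f} perf/mm^2")
print(f"hypothetical: {hypo_density:.5f} perf/mm^2")
print(f"density ratio: {real_density / hypo_density:.2f}x")  # ~1.86x
```

That ratio is just a way of quantifying how much worse the imagined chip's perf-per-area would be, which is the comparison the poster is inviting against Cypress.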
 
GPGPU is part of D3D11, so the non-graphics overhead in GF100 is going to be very low. Bits of the double-precision hardware (not even all of it) + ECC.

Jawed
 
No company shows performance numbers when the products aren't 100% yet
AMD's Computex presentations left the FPS counters on screen most of the time... so... your point?


unless they know it's going to fail against the competition and might as well release it and see how it goes (like R600).

I think you might find that R600 was "worked" on longer than Fermi is late so far. Ask around and you might find out how long they worked on it before they released it in its final configuration.
 
Since neliz is hinting that Fermi power consumption and/or performance might be known on the 14th (in his usual, "I know stuff" manner :)) I'm assuming there are NDA slides floating around with that date on it?

Based on other hints people are dropping, said slides explain features and architecture but give no specifics on final clocks? Although it baffles me how they can have performance numbers before they know what the clocks are going to be.
 
Since neliz is hinting that Fermi power consumption and/or performance might be known on the 14th (in his usual, "I know stuff" manner :)) I'm assuming there are NDA slides floating around with that date on it?

Based on other hints people are dropping, said slides explain features and architecture but give no specifics on final clocks? Although it baffles me how they can have performance numbers before they know what the clocks are going to be.

The slides say 700 MHz, I thought?

You mean the fake numbers? :LOL:
Not Cypress numbers, they just used lower parts.
 
Unless Nvidia made some unknown quantum leap in performance/sq mm in their design, the SIZE of the chip ROUGHLY equates to the power it uses if operating AT THE SAME PERFORMANCE LEVEL.

Or are you implying Nvidia can design a chip the same size and performance as Cypress on the same process node, with 2/3 the TDP?

:?::LOL:

I don't see what you are talking about. The chips might use the same process, but the power distribution in the chip and the frequencies are all different; you have to take that into account if you want to talk about overall power consumption.
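The rebuttal can be put in first-order terms: dynamic power is roughly P ∝ C·V²·f, so die area (a proxy for switched capacitance C) is only one knob, with voltage and clock mattering at least as much. A rough sketch with made-up numbers:

```python
# First-order dynamic power model P ~ C * V^2 * f. All inputs are relative,
# illustrative values, not real GPU specs.

def dynamic_power(cap_rel: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power from relative capacitance, voltage, frequency."""
    return cap_rel * volts ** 2 * freq_ghz

# Two same-process chips: B has 30% more area (hence ~30% more capacitance)...
p_a = dynamic_power(1.00, 1.20, 0.85)
p_b = dynamic_power(1.30, 1.20, 0.85)
print(f"same V/f:  B burns {p_b / p_a:.2f}x the power")  # 1.30x, tracks area

# ...but if B also runs at 10% lower voltage and 15% lower clock, the V^2 and
# f terms more than cancel the area penalty:
p_b_tuned = dynamic_power(1.30, 1.20 * 0.90, 0.85 * 0.85)
print(f"tuned V/f: B burns {p_b_tuned / p_a:.2f}x the power")  # below 1.0x
```

So "size roughly equates to power" only holds when voltage and clocks are held equal, which is precisely the assumption being challenged.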
 
To show potential customers they will offer a faster GPU in the near future?

Oh, sorry, I assumed it really was 30-35% faster, but since they don't show it, even via a subtle leak, that's probably not the case, so they can't show it as it would be a PR stunt... :devilish:

Taking a very, very twisted view, there's one last reason not to show performance now: they could want to polish their image. But that doesn't make much sense anyway, since their bad practices are still going on.


A subtle leak? What do you think "Fermi will be faster" means? They have been saying that, or something along those lines, since the launch of RV870.
 
A subtle leak? What do you think "Fermi will be faster" means? They have been saying that, or something along those lines, since the launch of RV870.

On a clock-for-clock basis? Ouch if true, but it would explain the special cases... :(
 
Were it 30-35% faster, do you really think they would still not show pseudo-official numbers, say by leaving the FPS counter visible in Heaven?

There are only 2 reasons to not show it, and none would really be welcome...

1- VERY bad PR staff.
2- It is "slow", or at least doesn't give a substantial performance advantage.

Even with a moderately low framerate (5870 level) they could still argue the drivers aren't ready and that's why they aren't shipping today, so the first reason is pretty much proven anyway.

Yeah, there is no reason for them to show any performance numbers whatsoever until actual launch time. It seems the only people demanding numbers are those hoping Fermi fails. Kinda like you.
 
Yeah, there is no reason for them to show any performance numbers whatsoever until actual launch time. It seems the only people demanding numbers are those hoping Fermi fails. Kinda like you.

What kind of baloney is that? I'm hoping Fermi knocks it out of the park and would love to see some benchmarks.

Paranoid much? :)

Let's not make this thread about ATI vs. NV (again) but about cool new tech.
 
http://www.madshrimps.be/vbulletin/f22/amd-radeon-hd-5870-hd-5850-performance-chart-66465/

So it uses 28% more power to achieve the same performance. And has no GPGPU pretensions.

What would its power use be if it had 45% more performance than the 4870? How about if it added 50% more transistors that had nothing to do with game performance?

Logic is a bitch.

Good grief man, the 4870 is on par with the GTX 260 216 and slower than the GTX 285, which is about 25-30% faster. The 4890 was a redesign and offers performance on par with the GTX 285. But again: 45% more die space and yet, as the other person pointed out, only 28% more TDP.

Why is it most people posting in this thread display such love for ATI or hate for Nvidia?
 