NVIDIA GF100 & Friends speculation

Some post-modern GPU-Z art:

gtx480bu3r.png


:LOL:
 
Some post-modern GPU-Z art:
Surely the die size is not 621mm², but rather a very misguided estimate...

The 900MHz base memory clock might be legit, since that is actually possible to read electronically (which the die size, for obvious reasons, is not ;))
 
That's probably just due to an immature BIOS or something. There's no reason it should draw (and therefore dissipate) much power in idle, especially since the GTX 280 and 285 already have fairly reasonable idle power.

I fully expect NVIDIA to improve on that with this generation, just as AMD has.

Unless, of course, there is something about the minimum voltage required to keep the chip running?

edit: and :runaway: about everyone discussing fake screenshots.
 
Here's a possible fit for a 621mm² GF100 die on the 41x41mm BGA substrate. Sure, it would be a tight fit for the SMD elements, but I think there would be enough room for them and the IHS contact perimeter:

67471950.jpg
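For what it's worth, the numbers roughly check out. A minimal sketch of the fit, assuming a square die (the 621mm² area and the 41x41mm substrate are just the figures from the mock-up above; real dies are rectangular, so treat the margin as approximate):

Code:
import math

die_area_mm2 = 621.0   # rumoured GF100 die area
substrate_mm = 41.0    # BGA substrate edge length

die_side = math.sqrt(die_area_mm2)        # ~24.9 mm for a square die
margin = (substrate_mm - die_side) / 2.0  # room left on each edge
print(f"die side ~{die_side:.1f} mm, ~{margin:.1f} mm free per edge")

That leaves roughly 8mm per edge for the SMD elements and the IHS contact perimeter, which is tight but plausible.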
 
Unless, of course, there is something about the minimum voltage required to keep the chip running?

edit: and :runaway: about everyone discussing fake screenshots.


to spill the beans:

Yep, it's fake. It was also posted on the 3DCenter.de forum, but there one member, Neocroth, quickly spotted the wrong Device ID: the ID is from a GTX275 Lightning.
 
Unless, of course, there is something about the minimum voltage required to keep the chip running?

edit: and :runaway: about everyone discussing fake screenshots.

Hmm, I don't know why that would be, but I don't know much about processes, so let's say that's the case.

NVIDIA can still clock down significantly in idle and presumably lower the voltage at least a little. Since dynamic power scales roughly with frequency times voltage squared, running the GPU at 40% of its maximum clock and 90% of its maximum voltage would make it draw 0.4 × 0.9² × 300W = 97.2W, assuming a 300W TDP. Surely, if the cooling system can handle 300W without perforating your eardrums, it should be able to dissipate ~100W fairly silently.
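A minimal sanity check of that figure, assuming the usual first-order dynamic-power model P ∝ f·V² (the 300W TDP and the 40%/90% factors are just the numbers from the post above; leakage is ignored, so the real idle draw would land somewhat higher):

Code:
# First-order CMOS dynamic power: P scales with frequency * voltage^2
tdp_w = 300.0        # assumed full-load power
clock_scale = 0.40   # 40% of maximum clock
voltage_scale = 0.90 # 90% of maximum voltage

idle_w = tdp_w * clock_scale * voltage_scale ** 2
print(f"estimated idle draw: {idle_w:.1f} W")  # -> 97.2 W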
 
to spill the beans:

Yep, it's fake. It was also posted on the 3DCenter.de forum, but there one member, Neocroth, quickly spotted the wrong Device ID: the ID is from a GTX275 Lightning.

Apart from that, the BIOS number also looked strange to me, and well... if they made it like 600mm², I bet Charlie would be laughing his ass off ;)
 
There are three 1920x1200 displays here, probably with 8xAA. I'm not convinced the benefit from 2GB of memory would be so significant on a single 2560x1600 display.

2560x1600 = 4.1MP
1920x1200 * 3 = 6.9MP

Yeah, it's 50% more pixels, but it's a lot closer than people think (people are always shocked to realize that 2560x1600 is nearly 2x the pixels of 1920x1200).

So with extreme textures and AA, it definitely won't hurt.
 
Actually it's 68% more pixels. Quite a difference. But I agree, there are probably cases where 1GB isn't enough for a 4MP resolution, especially with AA, lots of intermediate render targets and HQ textures in the mix.
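The arithmetic behind those figures, for anyone who wants to check:

Code:
single = 2560 * 1600      # 4,096,000 px (~4.1 MP)
triple = 3 * 1920 * 1200  # 6,912,000 px (~6.9 MP)

print(f"triple vs single: {triple / single:.4f}x")  # 1.6875x -> 68.75% more pixels
print(f"2560x1600 vs 1920x1200: {single / (1920 * 1200):.2f}x")  # ~1.78x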
 
1/ How the GTX 4x0 "kicked" the 5870's a*s: DX11 tessellation image quality comparison

http://bbs.pczilla.net/attachments/month_1003/1003060258f6b169a3b027b16b.gif

2/ The 5870 kicked the NVIDIA 480's a*s: DX11 performance without tessellation

http://translate.google.com/transla...diy/11079993.html&sl=zh-CN&tl=en&hl=&ie=UTF-8

Not sure what this is about. Is the guy trying to prove the HD5870 is faster than the GTX480 without tessellation? Kinda pointless not using tessellation in a tessellation benchmark ;)
 
Where is the picture with tessellation? Oh right, the card would be a lot slower. :LOL:
So "DX11 doesn't matter"?

DX11 surely does matter, but what do you think the average Joe is going to buy when he sees the 5870 trouncing the 480 in games?

If ATI's tessellation was ahead of its time three years ago, NVIDIA's tessellation is closer to its time now. But guess what? It's still too early, and in the best case NVIDIA has put too much into tessellation and not enough into the current demands of gaming.

The 480 is almost certainly losing to the 5870 in some games, and vs. non-reference models, i.e. 1GHz-core versions, it is going to get thumped hard in most current gaming benchmarks.

And that is why we keep seeing Unigine and Far Cry 2 and not much else.
 
Where is the picture with tessellation? Oh right, the card would be a lot slower. :LOL:
So "DX11 doesn't matter"?
With tessellation off the engine is still running in D3D11 mode. Not sure why GTX480 isn't any faster than HD5870 running the apparently "slower" version of this benchmark.

There may not be any difference in performance between versions 1.0 and 1.1 in tessellation-off rendering, though. If the only difference between these versions is the "culling" for tessellation-on mode, then the comparison is valid.

Of course, upcoming reviews will be based on different drivers for both cards.

Still, it seems reasonable to expect GTX480 to be faster here.

Jawed
 
So "DX11 doesn't matter"?
Ok let's get one thing straight here... DX11 != tessellation! There's a lot more in DX11 than just tessellation... in fact I would say DirectCompute is a lot more important and will be way more commonly used than tessellation.

Also I have my doubts as to whether that benchmark really represents a "good"/typical tessellation workload. If you look at the wireframe, it leaves a lot to be desired in terms of consistent triangle sizes, adaptive and distance-based LOD, etc. FWIW, in the only games I know of to date that include tessellation (Dirt 2 and AvP), there is very little performance hit (if any) to enabling it on the 5870.

That said, it's great that NVIDIA's tessellation performance looks to be that good, but I hope they didn't sacrifice anything in other areas for it.
 
Still, it seems reasonable to expect GTX480 to be faster here.
Jawed

That's a good question. I know that a GTX285 gets 45 FPS in DX10 at the same position. So the GTX480 would be only 25% faster with DX11 and without tessellation...
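Just to make the implied number explicit (a back-of-the-envelope reading of that claim, using only the figures quoted above):

Code:
gtx285_dx10_fps = 45.0  # GTX285, DX10, same camera position
claimed_uplift = 1.25   # "only 25% faster"

print(f"implied GTX480 result: ~{gtx285_dx10_fps * claimed_uplift:.0f} FPS")  # ~56 FPS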

Ok let's get one thing straight here... DX11 != tessellation! There's a lot more in DX11 than just tessellation... in fact I would say DirectCompute is a lot more important and will be way more commonly used than tessellation.

Yes, yes. But we know nothing about GF100's DirectCompute performance. All this DX11 downplaying is a result of GF100's tessellation performance.

Also I have my doubts as to whether that benchmark really represents a "good"/typical tessellation workload. If you look at the wireframe, it leaves a lot to be desired in terms of consistent triangle sizes, adaptive and distance-based LOD, etc. FWIW, in the only games I know of to date that include tessellation (Dirt 2 and AvP), there is very little performance hit (if any) to enabling it on the 5870.
AMD has had the only tessellation hardware available to developers for the last six months, so it seems natural that the games with tessellation so far (AvP, Dirt 2) don't show very much of it. Metro 2033 could be a game-changer because of NVIDIA's help.
 