GF100 evaluation thread

Whatddya think?

  • Yay! for both: 13 votes (6.5%)
  • 480 roxxx, 470 is ok-ok: 10 votes (5.0%)
  • Meh for both: 98 votes (49.2%)
  • 480's ok, 470 suxx: 20 votes (10.1%)
  • WTF for both: 58 votes (29.1%)

Total voters: 199. Poll closed.
I don't believe the rated typical design power matches the current crop of tested and released GTX cards. I think those numbers reflect the wafers put through this year: even with tweaks, the early wafers still carry most of the TSMC 40nm problems, which have subsequently been resolved. Once fresh batches of wafers come through, I'm pretty sure the power draw seen for those cards will be close to what we expect from their respective rated TDPs.
 
ATI still has a ways to go with CrossFire. Frankly, at this point it's not excusable. SLI tends to scale well with most games, whereas CrossFire still proves to be hit or miss.

On the flip side, they do have the multi-monitor gaming support that nVidia promised but didn't deliver, and they aren't frying cards with hastily released drivers either.

It should be interesting to see how hard nVidia's driver teams are taxed by SLI across multi-screen, eye-nfinity-type gaming.

If you look at the whole situation it makes you realise how hard ATI has been trying, and how far behind nVidia still is.
 
And GF100 will look fairly prescient as an architecture, albeit flawed as a product.
Fixed-function pipelines are on borrowed time. Once the GPU reaches ~80% ALUs (or whatever the number turns out to be), fixed-function stuff will be on the cusp of disappearing altogether, I reckon.

So, the fun bit is: when does that happen?

I guess Fermi's in the region of 40%; it's hard to separate the ALUs from the other stuff in the GPCs though, so it could be 30%. Cypress got substantially bigger than expected, so I expect it'll be lower - RV770 is around 29% (though some ALU control/scheduling hardware is probably missing from that assessment, and I've got no way of refining that).
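
(For the curious, the arithmetic behind that kind of estimate is nothing fancier than this - the mm² figures below are placeholders for illustration, not die-shot measurements:)

    # Back-of-the-envelope: what fraction of the die is ALUs?
    # The mm^2 figures are placeholders, not measured from a die shot.
    def alu_fraction(alu_mm2, other_mm2):
        """ALU share of total die area."""
        return alu_mm2 / (alu_mm2 + other_mm2)

    # e.g. a hypothetical ~330 mm^2 die with 95 mm^2 of ALUs and 235 mm^2 of
    # TMUs, ROPs, memory controllers, caches and fixed-function logic:
    print(f"{alu_fraction(95.0, 235.0):.0%}")   # -> 29%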

Generally ALUs aren't limited by bandwidth, so while bandwidth curbs things like ROP/TMU area increases - with the caveat that bandwidth-efficiency measures cost extra in area - ALUs are relatively free to breed like rabbits.
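
(A crude way to see why, with made-up throughput numbers rather than any particular card's specs:)

    # Crude arithmetic-intensity check: how much math per byte of DRAM traffic
    # before the ALUs are starved? Numbers are illustrative only.
    flops_per_s = 1.0e12     # hypothetical 1 TFLOP/s of ALU throughput
    bytes_per_s = 150.0e9    # hypothetical 150 GB/s of DRAM bandwidth
    print(f"{flops_per_s / bytes_per_s:.1f} flops per byte to stay ALU-bound")
    # Doubling the ALU count only raises that ratio; shaders doing more math
    # than this per byte fetched (most do) keep the extra ALUs busy anyway.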

GF100 apparently ditches some "fixed function" stuff by generalising buffers within L1 and L2. A lot of what's described as "fixed function" in the PolyMorph Engine looks ripe for running as a kernel in my view. I'm mystified why they aren't kernels in GF100.
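
(To illustrate what I mean by "ripe for running as a kernel" - a toy, CPU-side sketch of a tessellation-evaluation stage written as a flat data-parallel loop, i.e. exactly the shape a compute kernel takes. Purely illustrative; it says nothing about how GF100 actually implements the PolyMorph Engine:)

    # Toy sketch: domain-point evaluation for one quad patch, written as a flat
    # data-parallel loop - the shape a compute kernel would take.
    def eval_patch(corners, tess):
        """Bilinearly place (tess+1)^2 vertices across a quad patch."""
        verts = []
        for tid in range((tess + 1) ** 2):            # one "thread" per vertex
            i, j = tid % (tess + 1), tid // (tess + 1)
            u, v = i / tess, j / tess
            p00, p10, p01, p11 = corners
            pos = tuple((1 - u) * (1 - v) * p00[k] + u * (1 - v) * p10[k]
                        + (1 - u) * v * p01[k] + u * v * p11[k] for k in range(3))
            verts.append(pos)
        return verts

    quad = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
    print(len(eval_patch(quad, 8)))   # 81 vertices for tessellation factor 8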

I'm not convinced the "geometry focus" is going to win NVidia any significant medium- or long-term advantage. This isn't like PCF and fundamentally there's no reason to expect ATI will continue to scrape the barrel. And if Larrabee works some magic within the next couple of years, well, fixed function stuff is going to look super-dated.

Jawed
 
Fixed-function pipelines are on borrowed time. Once the GPU reaches ~80% ALUs (or whatever the number turns out to be), fixed-function stuff will be on the cusp of disappearing altogether, I reckon.

Tex samplers will remain - and will likely always remain. Their operation and requirements for the majority of workloads are just too different from the computation side.
 
GF100 apparently ditches some "fixed function" stuff by generalising buffers within L1 and L2. A lot of what's described as "fixed function" in the PolyMorph Engine looks ripe for running as a kernel in my view. I'm mystified why they aren't kernels in GF100.

I'm not convinced the "geometry focus" is going to win NVidia any significant medium- or long-term advantage. This isn't like PCF and fundamentally there's no reason to expect ATI will continue to scrape the barrel.

I 100% agree wrt the impending death of fixed-function units; in fact this assumption makes me more confident GF100 will end up looking rather prescient, as it is a more flexible architecture than Evergreen, specifically due to its memory subsystem. In order for AMD to fix its geometry performance, I think they're going to have to ditch their fixed-function tessellator and move to a more general-purpose solution - probably including caches similar to GF100's in order to avoid storing and loading huge amounts of tessellated geometry to and from DRAM. My point is that when AMD makes that change, their perf/W and perf/mm^2 on DX10 games will probably be reduced.
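
(A rough sizing exercise, with made-up numbers, of why spilling post-tessellation vertices to DRAM hurts and on-chip buffering matters:)

    # Rough sizing of post-tessellation vertex traffic if it round-trips DRAM.
    # All inputs are made-up illustrative numbers.
    patches_per_frame = 100_000
    tess_factor       = 16                      # per-edge tessellation factor
    verts_per_patch   = (tess_factor + 1) ** 2  # quad domain, roughly
    bytes_per_vert    = 32                      # position + normal + UV, say
    fps               = 60

    one_way = patches_per_frame * verts_per_patch * bytes_per_vert * fps
    print(f"{2 * one_way / 1e9:.0f} GB/s to write expanded verts out and read them back")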

All of this is just to justify why I'm not judging GF100's architecture based on DX9/10/early DX11 benchmarks. I think the future of gaming workloads looks rather different from DX10 rendering, and I think GF100 has made some expensive tradeoffs to be ahead of the curve. Time will tell.

(Although as I said earlier, I don't buy a card on guesses of how advanced its architecture might seem in a few years... ;))
 
Well, I definitely think that the GF100 is ahead in terms of tech compared to ATI's DX11 parts, but they clearly made some mistakes in implementation that have cost them.
 
That's flawed logic. Let's apply it to cooking.

"This roast is burnt, and tastes terrible."

"Are you a cook? In what way is it burnt?"

It's not flawed at all. If he has some engineering reason as to why it is flawed, he should point to where and why; otherwise he is just assuming it is flawed because of the high heat/power.
 
Well, I definitely think that the GF100 is ahead in terms of tech compared to ATI's DX11 parts, but they clearly made some mistakes in implementation that have cost them.

If GF100 were 50% smaller and had 100W less to play with, do you think it would still be equal in Crysis? Or how far behind would it be in BF2?

As a pure gaming GPU, it's not even close. I'm not sure what you think it has that puts it ahead in terms of tech. What does it have that ATI couldn't do a lot better if they had 50% more area and 100W more power to play with?
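
(For concreteness, the comparison I mean is just this, with placeholder numbers standing in for real benchmark and spec data:)

    # Crude perf/W and perf/mm^2 comparison. Inputs are placeholders,
    # not real benchmark results or die measurements.
    def efficiency(fps, watts, mm2):
        return fps / watts, fps / mm2

    chips = {
        "big chip":   (60.0, 320.0, 530.0),   # hypothetical fps, W, mm^2
        "small chip": (55.0, 190.0, 330.0),   # hypothetical fps, W, mm^2
    }
    for name, (fps, w, mm2) in chips.items():
        ppw, ppmm = efficiency(fps, w, mm2)
        print(f"{name}: {ppw:.2f} fps/W, {ppmm:.3f} fps/mm^2")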
 
If GF100 were 50% smaller and had 100W less to play with, do you think it would still be equal in Crysis? Or how far behind would it be in BF2?
The main mistake appears to have been with the memory controller. They may also have been able to do better with a smaller part built for power consumption, and relied on a dual-GPU card for max performance.
 
That's for another thread clearly...

Also, noisier than anything ever made? I'm guessing you forgot the GeForce FX's cooler?
Availability was already known to be pushed to April, so what's your point? Also, what's your source on it "being planned to compete with a dual-GPU part"? Is that you guessing, or was this ever confirmed? As far as I remember, most rumors hinted at a dual-GPU Fermi-based card being released as well, and that's the one that would take on the HD 5970. Given the power constraints, that certainly won't happen soon.

As for what exactly is better: performance. Most people here who just love to cite certain "articles" were expecting the GTX 480 to match or slightly outrun (5%) the HD 5870. That isn't the case. It's better, and from a GPU that's not even fully enabled, given the problems it had. It was certainly a complicated birth :)

Now power is definitely an issue. It consumes way too much for its performance and that's what's certainly going to be tackled on the next iteration and derivatives.

LOL Silustroll makes me LOLIRL.. REALLY...

"That's for another thread clearly..." .. yes thats for another thread. except you are the one who brought the initial point up... with the post "I guess the "other" campaign that tried to make GF100 worse that it turned out to be, has nothing to do with the bleak expectations for GF104 ?"

"Also, noisier than anything ever made ? I'm guessing you forgot the GeForce FX's cooler ?" .. well the claim to noisiest cooler ever may be for some debate (until someone posts decibal tests for both the FX and the GTX) however the crown clearly bleongs to NVidia.

"Availability was already known to be pushed to April, so what's your point ?" You mean that HARD Launch you proclaimed the GTX400 series was going to be ?? Apparently someone still hasn't figured out the difference between soft/hard and paper launches yet..

"Also source on it "being planned to compete with dual-GPU part". Is it you guessing or was this ever confirmed ?".. IIRC NV themselves said the Fermi based Geforce would be the fastest video card EVER..

"As far as I remember, most rumors hinted at a dual GPU Fermi based card to be released as well, and that's the one that would take on the HD 5970".. yeah I guess that would be the dual fermi based product that was supposed to launch "shortly after the GF100 initial launch" back in November and in time for the holiday season.. unless they meant THIS upcoming holiday season...

"Given the power constraints, that certainly won't happen soon.".. oh yeah those supposed power constraints that you and your fellow greenies claimed a certain other less than accurate website's numbers were totally off (despite being almost on the dot according to mosts results so far).. and don't you dare plug in two displays to one card unless you have a certified windtunnel case.. unless you like 80C+ idle temps.. just imagine what 3D Surround will bring..

.. so on and so on.. like shooting fermis in a barrel..
 
Not this again... Why do some feel the need to try and justify that guy's wrongs by twisting everything into a "reality" that doesn't exist?

The GTX 480 was never meant to compete with the HD 5970. NVIDIA has never countered a dual-GPU card with a single GPU, and this time it's no different. Even more so, because of the delays.

Yeah I guess you were right.. and NV never meant to say stuff like this:

Facebook (google cache): "NVIDIA The wait is almost over! The world's most anticipated--and fastest--PC graphics gaming technology ever created will be unveiled at PAX 2010"

"GF100 the world’s fastest and most innovative consumer graphics ever built."

http://www.donanimhaber.com/Nvidia_Fermi_mimarisi_ile_en_hizli_grafik_kartlarini_sunacagiz-17028.htm
" Fermi based GPU (GF100) will be fastest GPU solution in the industry"

IIRC, NV said that the GF100 was to be twice as fast as the GTX 285, which would have put it within range of the 5970. It was around October (most notably, right after the 5970 launch) that NV's tune of "fastest graphics card ever" suddenly changed to "fastest GPU ever".
 
I'd be interested in hearing people's perspectives on Nvidia's claims that the future of real-time graphics is more in geometry than pixel shaders. Of course, it's hard to predict the future, but it does seem to be the case that film-quality 3D rendering makes beautiful pictures by rasterizing huge numbers of tiny polygons. If we believe that the future of real-time graphics is to approach film rendering, then GF100's emphasis on geometry over pixel shaders seems justified.
I'm not sure where Nvidia claimed geometry is more important, but shaders are very important in film rendering (including REYES), so regardless of whether or not they are per-pixel shaders, they will continue to take up a significant amount of render time.
 
Well, I definitely think that the GF100 is ahead in terms of tech compared to ATI's DX11 parts, but they clearly made some mistakes in implementation that have cost them.

A faster GPU doesn't necessarily translate into an affordable GPU series in the following months.
 
That's true, but IIRC REYES relies on sub-pixel tessellation/micropolygons, which causes inefficiencies in current GPUs, so total ALU flops alone is not a good measure of how well you'll perform on REYES. (e.g. even if 90% of the time is spent in shaders, you could end up with inefficient pipeline usage where big chunks of the chip are idle waiting for work)
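
(One concrete example of that inefficiency: pixel shading is organised in 2x2 quads, so a triangle covering roughly one pixel still lights up a whole quad. The coverage numbers below are made up for illustration:)

    # Quad-shading efficiency: useful pixels vs. pixels actually shaded.
    # Coverage numbers are made up for illustration.
    def quad_efficiency(covered_pixels, quads_touched):
        return covered_pixels / (quads_touched * 4)

    print(f"{quad_efficiency(16, 6):.0%}")  # biggish triangle: ~67% useful work
    print(f"{quad_efficiency(1, 1):.0%}")   # micropolygon: 25% at best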
 
Chalnoth said:
The main mistake appears to have been with the memory controller. They may also have been able to do better with a smaller part built for power consumption, and relied on a dual-GPU card for max performance.
Even if it isn't as fast as they would like, bandwidth certainly doesn't seem to be an issue. I'd say the main mistake was their implementation of half rate DP.
 
If I were NV's CEO, I would get the guy who thought 3-monitor gaming would be better if it required 2 cards and shoot him in the head...
along with the PR department
 
Even if it isn't as fast as they would like, bandwidth certainly doesn't seem to be an issue. I'd say the main mistake was their implementation of half rate DP.
That's possible, but it does depend upon how much die space that particular feature took up.
 
If I were NV's CEO, I would get the guy who thought 3-monitor gaming would be better if it required 2 cards and shoot him in the head...
along with the PR department
Obviously that particular feature was tacked on later, after ATI released their own product. And if they actually intend to support it, they're going to release cards down the road that support at least 3 displays on just one card.
 