NVIDIA GF100 & Friends speculation

Yes, but by the time Fermi is released, there may well be an HD 5890 (or whatever it may be called), which would essentially be an HD 5870 at 1GHz. So the comparison makes sense...

Until such a card comes to be, the comparison is not valid, as you are using an OC'd card to compare to a non-OC'd card. It's the same screwed-up philosophy people used to compare the 4870X2 to the GTX285, hell, even the 5870 compared to either the 4870X2 or the GTX295.
 
Until such a card comes to be, the comparison is not valid

By that same logic, there should be no comparisons with the GF100/Fermi because it is not commercially available. :rolleyes:
 
By that same logic, there should be no comparisons with the GF100/Fermi because it is not commercially available. :rolleyes:

Here's the problem: we know the GF100 is coming and details are emerging; tomorrow all kinds of things will be known about it. Does anyone here, except for Charlie and his anti-Nvidia posting, know thing one about ATI's refresh or possible release date?
 
Don't know where your X15k comes from, but obviously not from the same source as the P22k.


Source is the same for both numbers - neliz (he posted them at XS). No idea where he got them from.

Anyway, without testbed details those numbers are meaningless. With a decently clocked i7 920 a 1GHz HD 5870 scores 21300 on Performance, not 17k. ;)

Not sure how that's relevant. Was just pointing out to PSU-failure that his assumption that the Extreme gap would be smaller than the Performance gap didn't hold true for the numbers given. Use your judgment as to the reliability of those numbers.
 
Until such a card comes to be, the comparison is not valid, as you are using an OC'd card to compare to a non-OC'd card. It's the same screwed-up philosophy people used to compare the 4870X2 to the GTX285, hell, even the 5870 compared to either the 4870X2 or the GTX295.

Yeah well, comparing the HD 5870 to Fermi is comparing an available card to one that isn't. I maintain that the comparison with the "hypothetical" 5890 makes sense.
 
It's a revolutionary new design oriented toward tessellation (those pesky triangles) and geometric programming. The problem being, every wireframe is made up of triangles, and tessellation takes those triangles and breaks them down into many smaller triangles. This core is uniquely designed to handle that, so in geometry- and shader-heavy games you will see more than the raw 100% power increase.

Surely that's going to be embarrassing for AMD, having Nvidia beat them at their own technology? What's it going to mean for games if devs start using higher polygon counts, which will slow AMD cards down a lot more than Nvidia ones?

I wonder why AMD didn't do something similar when working on their DX11 card.
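To put a rough number on the explanation quoted above about triangles being broken into many smaller ones, here's a minimal sketch assuming plain midpoint subdivision; real DX11 tessellators work on parametric patches with tessellation factors, so this only illustrates why the triangle count (and with it the setup load) explodes:

```python
# Illustration only: split each triangle into four by inserting edge midpoints.
# Hardware tessellation is patch-based and factor-driven, not this scheme.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(tri):
    """Split one triangle (3 vertices) into 4 smaller triangles."""
    v0, v1, v2 = tri
    m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
    return [(v0, m01, m20), (m01, v1, m12), (m20, m12, v2), (m01, m12, m20)]

mesh = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
for _ in range(3):
    mesh = [small for tri in mesh for small in subdivide(tri)]
print(len(mesh))  # 64 triangles from 1 after three levels of subdivision
```

Each level quadruples the triangle count, which is why setup/tessellation throughput becomes the limiter so quickly.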
 
Is there a way to put AIBs/IHVs under an NDA so they aren't able to say they do NOT have any chips yet? I'd think they could only be barred from talking about it after they had chips. :???:

Given the loving and mentoring nature of NV, I would expect them to deal harshly with any info given out, NDA or not. Look at what they did to Zotac over the 'leak' about the 250 to Anandtech. Given that I sent him the emails, and I know where I got it, I can say with certainty that NV's punishment system is rather arbitrary and meant more for keeping 'close friends' in fear of them than for keeping info from getting out.

That said, they suck at keeping secrets; of late, AMD is MUCH better. There is more than a little irony there.

As for them saying something, or not saying something, it doesn't matter. NDA or not, if they do something that embarrasses NV, they will have sh*t land on them. I have seen it a dozen times in the past. In this example, they might just get their samples delayed two weeks 'accidentally', or more likely, their allocation cut to basically zero.

Gotta love 'partners' that do things like that. Then again, they are still telling AIBs that TSMC 55nm shortages are to blame for lack of GT200bs. :)

-Charlie
 
Yes, but by the time Fermi is released, there may well be an HD 5890 (or whatever it may be called), which would essentially be an HD 5870 at 1GHz. So the comparison makes sense...
Comparison doesn't make that much sense if the P score comes from NV, as they surely give a best-case-scenario score... so with PhysX acceleration enabled and the insanely high CPU score that implies.

With this in mind, P22k for a "GTX380" seems mediocre, as that's ~18k on the P graphics score, less than 10% higher than a 5870.

X15k, on the other hand, would imply alien technology used everywhere, even more so if associated with P22k.
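For reference, here's roughly how an ~18k graphics figure can be backed out of a P22k overall number. This is a sketch only: it assumes Vantage's overall score is a weighted harmonic mean of the GPU and CPU scores with a 0.75/0.25 split, and the PhysX-inflated CPU score used below is a hypothetical value, not a leaked figure:

```python
# Sketch: back a Vantage GPU score out of an overall score.
# Assumed formula: overall = (w_gpu + w_cpu) / (w_gpu / gpu + w_cpu / cpu)
# with w_gpu = 0.75, w_cpu = 0.25 (assumed weights, not verified here).

W_GPU, W_CPU = 0.75, 0.25

def overall(gpu, cpu):
    return (W_GPU + W_CPU) / (W_GPU / gpu + W_CPU / cpu)

def gpu_from_overall(total, cpu):
    # Invert the weighted harmonic mean to isolate the GPU term.
    return W_GPU / ((W_GPU + W_CPU) / total - W_CPU / cpu)

cpu_score = 50_000  # hypothetical PhysX-inflated CPU score
print(round(gpu_from_overall(22_000, cpu_score)))  # ~18.5k graphics
print(round(overall(18_000, cpu_score)))           # ~21.4k overall
```

The point being: the higher the (PhysX-assisted) CPU score, the less graphics performance a given P score actually represents.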
 
It's amazing to think that no one's broken the 1-tri/clk barrier since the Voodoo 1, and whereas texturing, rasterizing, and ALU performance have gone through the roof, geometry has been tied inherently to clocks and followed more of a linear or quadratic growth. That's what, 13 years ago, IIRC?
It's because, in general, games didn't push the triangle limit, and resolutions have increased a lot over those years, so the focus has been on pixel power. Aside from resolution, I suspect this is because of the extra development effort needed to create a range of models. Maybe the scalability of tessellation will change that. Or EyeFinity and Nvidia's 3D Surround will catch on and pixels/clock will be even more necessary. We'll see...

Take a look here: http://www.behardware.com/articles/723-4/product-review-the-nvidia-geforce-gtx-280-260.html

The results of RightMark's VS tests point to a 0.5 triangle/clock setup rate for RV670, which scores ~270 at 775MHz, while R600 scores ~600 at 750MHz. RV770 scores ~650, probably due to being only partially setup-limited.
I don't know what's going on in that test, but RV670 has a setup rate of 1 tri per clock.
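For anyone following the arithmetic in this back-and-forth, the relationship being argued over is simply peak setup throughput = (triangles per clock) x (core clock). A tiny sketch, with the caveat that the inverted form only says something if the test is genuinely setup-limited (the measured rate below is a hypothetical value, since the unit of the RightMark score isn't clear):

```python
# peak setup throughput = tris/clock * core clock
def peak_mtris_per_s(tris_per_clock, clock_mhz):
    return tris_per_clock * clock_mhz

# Invert it: implied tris/clock from a measured triangle rate.
# Only meaningful if setup is actually the bottleneck in the test.
def implied_tris_per_clock(measured_mtris_per_s, clock_mhz):
    return measured_mtris_per_s / clock_mhz

print(peak_mtris_per_s(1.0, 775))                  # 775.0 Mtris/s ceiling at 1 tri/clk
print(round(implied_tris_per_clock(390, 775), 2))  # ~0.5 tri/clk from a hypothetical 390 Mtris/s
```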
 
Surely that's going to be embarrassing for AMD, having Nvidia beat them at their own technology? What's it going to mean for games if devs start using higher polygon counts, which will slow AMD cards down a lot more than Nvidia ones?

I wonder why AMD didn't do something similar when working on their DX11 card.
Perhaps you should consider all metrics before coming to such conclusions, i.e. the performance differences being talked about: how do they measure up in terms of die size difference/number of transistors, board power, etc., etc.? ;)
 
There's an interesting thing happening in forums with these revelations. Months ago, there was much optimism and props given to AMD for their focus on tessellation in DX11, and from that came the assumption that NVidia put no work into it, and that if they supported it at all, it would be some late, half-assed, bolted-on, or emulated tessellation and would not perform as well as AMD's. I'll note for the record that much the same story was repeated with Geometry Shaders (speculation that NVidia would suck at it, and that the R600 would be the only 'true' DX10 chip). AMD has had some form of tessellation for several generations, all the way back to N-patches, so there's some logic to these beliefs. Also, the Fermi announcement mentioned nothing about improvements to graphics (only compute), so there has been a tacit assumption that the rest of the chip is basically a G8x with the Fermi CUDA architecture tacked on.

But as more and more leaks seem to indicate that NVidia has invested significant design work into making tessellation run very fast, it seems like some are in disbelief, while others are now starting to downplay the importance of tessellation performance and benchmarks (whereas once it was taken for granted that this was AMD's strong point). If indeed NVidia has significantly boosted triangle setup, culling, and tessellation, this could be like G80 all over again, where the total lack of information caused people to assume the worst and the final chip came as a big surprise. I think they deserve much props if they did increase the setup rate.

As Mint said, it's been far too long to leave this part of the chips unchanged. Setup seems exactly where it was 10 years ago.

They changed stuff all right, but I am unconvinced it is for the better. GF100 will be the best case, things only scale DOWN from here. With the ATI route, they are fixed, and you have a set target to code against.

-Charlie
 
Yup... from NV3x to G80, geometry setup has improved by what? A factor of two? Improving geometry setup is something that needs to be done... so did Fermi do that? ;)

Not if you count units. :) That said, I think they did, but the real question is whether they did it by enough, especially for derivatives.

-Charlie
 
They changed stuff all right, but I am unconvinced it is for the better.

What would Charlie Demerjian's ideal GPU architecture look like?

They changed stuff all right, but I am unconvinced it is for the better. GF100 will be the best case, things only scale DOWN from here. With the ATI route, they are fixed, and you have a set target to code against.

That's an interesting statement. So having the same geometry performance in entry level and flagship parts is now supposed to be a good thing? I think it's going to be hard even for you to downplay what Nvidia seems to have done here.
 
Perhaps you should consider all metrics before coming to such conclusions, i.e. the performance differences being talked about: how do they measure up in terms of die size difference/number of transistors, board power, etc., etc.? ;)


Most gamers really couldn't care less about how "big" the chip is.
 
Most gamers really couldn't care less about how "big" the chip is.

Not directly, but they do care about cost, and whether or not it will work with their existing system. I'm hoping it's not over $500 for that beast, and that my Corsair 620w PSU can power it.
 
Perhaps you should consider all metrics before coming to such conclusions, i.e. the performance differences being talked about: how do they measure up in terms of die size difference/number of transistors, board power, etc., etc.? ;)

Err, depends on how one spins it, correct? GF100 has ~3.1 billion transistors, while the HD 5970 has ~4.3 billion transistors. In other words, the HD 5970 has ~40% MORE transistors than GF100. So who is really more efficient than whom? :)
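A quick check of that figure, using the approximate transistor counts quoted above:

```python
# ~3.1B (GF100) vs ~4.3B (HD 5970), per the figures quoted in the post above
gf100, hd5970 = 3.1e9, 4.3e9
print(f"{(hd5970 / gf100 - 1) * 100:.0f}% more transistors")  # ~39%, i.e. roughly 40% more
```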
 
Not directly, but they do care about cost, and whether or not it will work with their existing system. I'm hoping it's not over $500 for that beast, and that my Corsair 620w PSU can power it.

Well Nvidia has shown they are capable of being competitive in cost if the performance delta demands it.
 
They changed stuff all right, but I am unconvinced it is for the better. GF100 will be the best case, things only scale DOWN from here. With the ATI route, they are fixed, and you have a set target to code against.
-Charlie
Maybe you could explain how their proposed change could affect anyone for the worse, then?

I was under the impression that being able to alleviate perceived bottlenecks is what it's all about - and somehow Nvidia seems to have identified trisetup/tessellation as their main bottleneck, whereas AMD went with doubling raster, FLOPS, and texturing rates.
 
Most gamers really couldn't care less about how "big" the chip is.

Most gamers pay their own electricity bills, and, going by volume, most of them buy cards or computers with graphics cards that don't need a PCI-E power connector. I'm sure even enthusiasts are starting to notice an increase in their energy bills. Performance per watt is becoming an even more important metric than performance per mm^2.
 