NVIDIA GT200 Rumours & Speculation Thread

100 GFLOPS @ 3870? I recall the Stanford guys saying they get close to the theoretical max out of the HD Radeons, which should translate to close to 500 GFLOPS, no?
Let's wait and see. It could be that the 3870 reaches 250 GFLOPS of useful calculations, but 500 I highly doubt.

The graph is clearly somehow f#cked up anyway
If you're talking about the graph in 4.jpg, then it seems to show FLOPS produced per computation unit, normalised to the PS3. You can glance at the stats here. Until the NDA lifts, we can only guess whether the last bar indicates actual measured or optimistically projected performance ;).
 
Let's wait and see. It could be that the 3870 reaches 250 GFLOPS of useful calculations, but 500 I highly doubt.

If you're talking about the graph in 4.jpg, then it seems to show FLOPS produced per computation unit, normalised to the PS3. You can glance at the stats here. Until the NDA lifts, we can only guess whether the last bar indicates actual measured or optimistically projected performance ;).

Normalized by PS3 or not, when all Radeons are averaging over double the performance of the PS3, it's clearly wrong.
 
Well, a Folding@Home CUDA client will finally give us some kind of benchmark to compare GPGPU performance between AMD and nVidia. By the looks of that graph... unless GT200 becomes a monster GPGPU, G80/G92 should also be quite a bit faster than the 3870.
 
Well, a Folding@Home CUDA client will finally give us some kind of benchmark to compare GPGPU performance between AMD and nVidia. By the looks of that graph... unless GT200 becomes a monster GPGPU, G80/G92 should also be quite a bit faster than the 3870.

That graph is definitely screwed; look at what I said above. According to that graph, the HD 3870 has under twice the performance of the PS3, yet based on Stanford's stats all Radeons, most of which are likely slower than an HD 3870, already average over twice the performance of the PS3 (per GPU).
 
If RV770's 480 ALU lanes can put together ~1 TFLOP it'll certainly be an interesting comparison! They'll both have roughly the same bandwidth.
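
A rough back-of-the-envelope sketch of that figure, assuming each ALU lane issues one MAD (2 FLOPs) per clock; the per-lane FLOP count and the clocks below are my assumptions, not confirmed specs:

Code:
# Peak-GFLOPS estimate: lanes * clock (MHz) * FLOPs per lane per clock.
# Assumption: each lane does one MAD (multiply-add) per clock = 2 FLOPs.
def peak_gflops(lanes, clock_mhz, flops_per_lane=2):
    return lanes * clock_mhz * flops_per_lane / 1000.0

for clock_mhz in (850, 1000, 1050):          # hypothetical ALU clocks
    print(clock_mhz, "MHz ->", peak_gflops(480, clock_mhz), "GFLOPS")
# 480 lanes need an ALU clock of roughly 1.05 GHz to reach ~1 TFLOP.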

Then R700 comes along with ~double the FLOPs and bandwidth. Hmm... this is going to be fun.

Jawed
 
Since the 9800 GTX is bandwidth-limited, how will G92b be faster? How much faster can they run the GDDR3?

Jawed

If they'd go to the trouble of further increasing frequencies, they'd also add GDDR5 support. Even "underclocked" 1.6GHz GDDR5 would give enough bandwidth for such a GPU.
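
As a rough illustration of the bandwidth argument (my numbers: a 256-bit bus, 2.2 Gbps effective for the 9800 GTX's GDDR3, and the assumption that "1.6GHz GDDR5" works out to roughly 3.2 Gbps effective):

Code:
# Memory bandwidth (GB/s) = effective data rate per pin (Gbps) * bus width (bits) / 8.
def bandwidth_gbs(effective_gbps, bus_width_bits=256):
    return effective_gbps * bus_width_bits / 8

print(bandwidth_gbs(2.2))  # ~70.4 GB/s  - 9800 GTX-style GDDR3 (assumed 2.2 Gbps effective)
print(bandwidth_gbs(3.2))  # ~102.4 GB/s - "underclocked" GDDR5 (assumed 3.2 Gbps effective)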

If RV770's 480 ALU lanes can put together ~1 TFLOP it'll certainly be an interesting comparison! They'll both have roughly the same bandwidth.

Well, a FLOP isn't always a FLOP.
 
GTX 280 doesn't hit 1 TFLOP. For the moment it's stuck somewhere between 900 and 950 GFLOPS. ;)

1.3 GHz ALUs? Hmm. Seems a bit low even for this monster chip.
If that was calculated at 2 FLOPs per ALU, though, it's almost 2 GHz, which is a bit high.
I was expecting something in the 1.5 GHz range...
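
Making the arithmetic behind the two readings explicit (assuming a 240-SP part, which isn't stated anywhere here and is purely my assumption):

Code:
# Back out the implied shader clock (GHz) from a peak-GFLOPS figure.
# Assumption: 240 SPs -- used purely for illustration, not a confirmed spec.
def implied_clock_ghz(gflops, sp_count=240, flops_per_sp=3):
    return gflops / (sp_count * flops_per_sp)

print(implied_clock_ghz(930, flops_per_sp=3))  # ~1.3 GHz if each SP counts as MAD + MUL
print(implied_clock_ghz(930, flops_per_sp=2))  # ~1.94 GHz if each SP counts as MAD only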
 
That graph is definitely screwed; look at what I said above. According to that graph, the HD 3870 has under twice the performance of the PS3, yet based on Stanford's stats all Radeons, most of which are likely slower than an HD 3870, already average over twice the performance of the PS3 (per GPU).
Taking Stanford's stats and dividing Current TFLOPS by Active clients doesn't give you any useful number. They don't give any indication of how many clients contribute to Current, and Active is defined as:
Active GPUs are defined as those which have returned WUs within 10 days (due to the shorter deadlines on GPU WUs). Active PS3's are defined as those which have returned WUs within 15 days.
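
For clarity, this is the naive per-client estimate being argued about (the inputs below are made-up placeholders, and as noted above the Current/Active division isn't really apples-to-apples):

Code:
# Naive per-client estimate from the client stats page:
# per-client TFLOPS = Current TFLOPS / Active clients, then normalised to the PS3.
# All numbers below are hypothetical placeholders, not Stanford's actual figures.
def per_client_tflops(current_tflops, active_clients):
    return current_tflops / active_clients

gpu = per_client_tflops(current_tflops=800.0, active_clients=8000)
ps3 = per_client_tflops(current_tflops=1400.0, active_clients=28000)
print(gpu / ps3)  # "normalised to PS3", ~2.0x with these made-up inputs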
 
I was just wondering whether GT200's shaders could indeed be 2 FLOPs, and whether that could fit the rumoured +50% "shader efficiency": throwing out the MUL ALU and using the MADD ALU for SF as well could increase efficiency in terms of "FLOPs per ALU", even if this seems a bit strange to me...
 
First link has some surreal pic of Pope John Paul II with the heading CUDA over it. No clue what that's about.

The rest of the poster is in Polish, and "cuda" means "miracles" in Polish. So it's probably a real poster that had nothing to do with Nvidia, but it makes for a pretty funny Editor's Day slide.
 
The forcibly shopped LegitReviews slide proves it was the next-gen part then?

PS. Never mind, it had to be either SLI or next gen, period.
 
More pics:
http://www.legitreviews.com/article/712/1/
http://www.legitreviews.com/article/713/1/

First link has some surreal pic of Pope John Paul II with the heading CUDA over it. No clue what that's about.

The second link has a clearer shot of F@H running, showing "Performance: 440ns/day" and that Jensen is part of "Team WhoopAss".

Notice that, even though the name of the GeForce card is blurred out in the second link, it is easy to make out that it says "_ _ _[space]_ _ _", and it looks like it must be GeForce GTX 280, as others have noted.

Was Mike Houston the guy who used to lead the Folding@Home project at Stanford? Interesting that soon after he leaves for AMD, there is an announcement about Folding with NVIDIA GPUs...
 
Interesting that soon after he leaves for AMD, there is an announcement about Folding with NVIDIA GPUs
I take this as a sign that CUDA is now a much more stable platform than it was a year or two ago. If you remember, Stanford stated about a year ago that they had some problems getting F@H running reliably on NVidia cards, but that they were working with NVidia to resolve them. And now, it seems, their efforts are bearing fruit.
 
CJ seems to confirm the 930 GFLOPS rumours we heard from China.
As I don't believe in a shader clock as low as 1.3 GHz, I believe more in a 1.9375 GHz shader clock.
This translates (assuming the 1:2.5 core/shader clock ratio of most G92 cards) into a suspiciously nice, round number: a 775 MHz core clock.

In short, I'm betting on 775/1938 clocks for the GTX 280.
For the memory I would bet on 2200 MHz.
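
Spelling out where those numbers come from (assuming 240 SPs at 2 FLOPs per clock, neither of which is confirmed, plus the 1:2.5 core/shader ratio mentioned above):

Code:
# Work backwards from ~930 GFLOPS to a shader clock, then to a core clock
# via the ~1:2.5 core/shader ratio of most G92 cards.
# Assumptions: 240 SPs, 2 FLOPs per SP per clock (neither is confirmed here).
SP_COUNT = 240
FLOPS_PER_SP = 2

shader_mhz = 930e9 / (SP_COUNT * FLOPS_PER_SP) / 1e6  # ~1937.5 MHz
core_mhz = shader_mhz / 2.5                           # ~775 MHz
print(round(shader_mhz, 1), round(core_mhz, 1))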

I don't think these are unbelievable; in fact, there is a G92 product shipping at these clocks: the EVGA e-GeForce 9800 GTX SSC.

For the GTX 260 we don't have rumours, but I would bet on clocks near or exactly the 9800 GTX's 675/1680.

Anyone else want to bet on clock speeds?
And CJ, thanks for the confirmation on the GFLOPS; would you like to bet? :LOL:
 
juan, where did you pull your numbers from, and why are you "betting" on them?

In other words, is this simply what you believe, or something more concrete?

I don't think it makes sense to predict clock speeds based on a gut feeling :D
 
CJ seems to confirm the 930 GFLOPS rumours we heard from China. As I don't believe in a shader clock as low as 1.3 GHz, I believe more in a 1.9375 GHz shader clock.

That doesn't mesh with the rumoured +50% per-clock improvement though... it's much more likely that those 930 GFLOPS are based on 3 FLOPs/shader @ 1300 MHz. Either way, it sounds like it's going to kick ass all over the place.
 