NVIDIA GT200 Rumours & Speculation Thread

Maybe. Maybe not. A 55nm G92 should be able to (in theory) clock both its master and shader cores quite a bit higher.
55nm is an optical shrink of 65nm. When has an optical process shrink ever offered any significant speed boost? 130->110? 90->80? Not that I can recall. Now, NVIDIA may choose to add some performance features to G92b since it will be a smaller chip, but then it's not just a shrink anymore.

-FUDie
 
55nm is an optical shrink of 65nm. When has an optical process shrink ever offered any significant speed boost? 130->110? 90->80? Not that I can recall. Now, NVIDIA may choose to add some performance features to G92b since it will be a smaller chip, but then it's not just a shrink anymore.
They shouldn't do a die shrink unless it's to 45nm, otherwise it's a waste of effort.
You're right that 55nm wouldn't yield good enough results, but 45nm could.
 
It would yield more chips per wafer, would it not?
True, but I'm talking about performance gains: it wouldn't be too noticeable. You might run a tad cooler, but that's about it; you would have to add features on top for the 55nm part to be worthwhile (performance-wise).
 
True, but I'm talking about performance gains: it wouldn't be too noticeable. You might run a tad cooler, but that's about it; you would have to add features on top for the 55nm part to be worthwhile (performance-wise).
Moving from 65nm to 55nm is a cost-saving move. It would allow NVIDIA to either make more money selling G92b chips (compared to G92) or sell them for a lower price, or possibly both.

-FUDie
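To put rough, illustrative numbers on that cost argument - a minimal sketch, assuming the commonly quoted ~324 mm² G92 die and an ideal optical shrink (real shrinks scale somewhat less), using the usual dies-per-wafer approximation:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic dies-per-wafer approximation: gross dies minus an edge-loss
    term; ignores scribe lines and defect yield."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Assumed die areas, for illustration only: ~324 mm^2 for the 65nm G92,
# and a perfect (55/65)^2 area scaling for the 55nm shrink.
g92_65nm  = dies_per_wafer(324)                    # ~181 candidate dies
g92b_55nm = dies_per_wafer(324 * (55 / 65) ** 2)   # ~260 candidate dies

print(g92_65nm, g92b_55nm)
```

Even before any yield benefit from the smaller die, that is on the order of 40% more candidate dies per 300mm wafer, which is where the room to cut prices or fatten margins comes from.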
 
http://www.pcper.com/comments.php?nid=5679

Yes, it's finally coming - the NVIDIA GPU client we have all wanted since first seeing and tasting the power of the ATI GPU Folding@Home client.

Vijay Pande, the man behind the Folding@Home project, was on hand to demonstrate the first showing of the NVIDIA GPU client.

I don't think I'll be spoiling anything by saying the new GPU was incredibly fast and the upcoming GPU will be faster than any Folding client today including the PS3; you will be impressed.
 
It looks like PC Per got a mole into the editor's day and posted pics from it (NDA? what's that?).

NV Acquires RayScale
F@H client

Anyone know how "averaging ~100GFLOPS sustained" compares for F@H? And it looks like Geo can finally change his sig. :rolleyes:

Not completely sure, but I know that a C2D does ~25 GFLOPS and a C2Q does ~50 GFLOPS, so ~100 would be dual quads. I have no idea about "sustained" vs "peak/theoretical" though.
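As a rough sanity check on those CPU figures - the clock and FLOPs-per-cycle numbers below are my assumptions (a ~3 GHz Core 2, double precision via SSE2), not measured F@H throughput:

```python
# Peak FLOPS = cores x clock (GHz) x FLOPs per core per cycle.
# Assumption: 4 double-precision FLOPs/cycle per Core 2 core
# (one 2-wide SSE2 add + one 2-wide SSE2 mul per cycle).
def peak_gflops(cores, ghz, flops_per_cycle=4):
    return cores * ghz * flops_per_cycle

print(peak_gflops(2, 3.0))  # Core 2 Duo  -> 24 GFLOPS, i.e. the ~25 quoted
print(peak_gflops(4, 3.0))  # Core 2 Quad -> 48 GFLOPS, i.e. the ~50 quoted
```

That lines up with the quoted ~25/~50, which suggests those are theoretical peaks rather than sustained folding numbers.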
 
Anyone know how "averaging ~100GFLOPS sustained" compares for F@H?
All AMD cards currently participating in the beta test average about 70 GFLOPS. As the GPU client is still in beta testing, the current small work units don't load the GPUs to the max. So ~100 GFLOPS is about as much as an AMD 3870 would produce, IMHO.

Another really interesting part will be comparing CUDA to CAL - GPU folders are complaining that CAL itself loads the CPU so much that a sub-3 GHz CPU can't keep up with the dataflow and becomes a bottleneck by itself. Part of the problem is, again, the small nature of the current computations. AMD has also acknowledged they are working on minimising the CPU overhead of CAL.

On a sidenote, AMD has had F@H running on their cards for over a year. Despite that, they didn't really advertise the fact. Maybe they did bring it up when working directly with clients, but mainstream tech sites didn't even mention it in their coverage of the latest launches from AMD. It seems the PR department of NVIDIA has yet again proved its worth.
 
[Attached images: folding4im7.jpg, folding3wj7.jpg]
It's probably a GeForce GTX 280; otherwise they wouldn't be showing off old hardware at the NVIDIA Editor's Day event.
 
Rename the pictures in the article to 3 and 4 and see... :D

Cool. How the hell did you think of doing that?

Anyway, can we glean anything from the other pics that might give us more info?

EDIT: I thought it might be interesting to see if I could clean up the image and make the information clearer.

Click the image to see the enlarged version. I'm sure it says GeForce GTX 280.

 
Moving from 65nm to 55nm is a cost-saving move. It would allow NVIDIA to either make more money selling G92b chips (compared to G92) or sell them for a lower price, or possibly both.
I agree it would be a move totally based on price.
As for G92b, any idea when it's released? I'm wondering how soon after G92b is released GT200 will follow.
It's probably a GeForce GTX 280; otherwise they wouldn't be showing off old hardware at the NVIDIA Editor's Day event.
Why would they compare it to a CPU, a PS3, and an HD 3870? No, it's just their current technology being compared to the rest of the market - sort of an "I'm better than all the rest" rant (maybe I'm reading it wrong). If it were GT200, I'm thinking they would compare it to the current G80-G92 as well, to make it look even more outstanding. I'm not ruling out the second pic, though.
 
55nm is an optical shrink of 65nm. When has an optical process shrink ever offered any significant speed boost? 130->110? 90->80? Not that I can recall. Now, NVIDIA may choose to add some performance features to G92b since it will be a smaller chip, but then it's not just a shrink anymore.

-FUDie

G71 was on 90nm compared to G70 @ 110nm, which obviously doesn't belong to the above category. In any case, if they'd want to further increase frequencies on G92b compared to G92, I suspect it would also cost some extra transistors, since the higher ALU frequencies don't come for free.

Theoretically a G92b could that way reach something like 800/2000, and coupled with GDDR5 it could stand against RV770 more than just decently. I have the feeling, though, that they might introduce some half-assed framebuffer variant out of GT200 (a la 8800 GTS/320) with 448MB of RAM on board to cover that spot, which frankly doesn't sound ideal either.
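Purely as a sketch of what such a part would look like on paper - assuming G92's 128 SPs and 256-bit bus carry over unchanged, and picking a 3.6 Gbps GDDR5 data rate only for illustration:

```python
sps, shader_mhz = 128, 2000          # assumed: G92's SP count at the 2000 MHz speculated above

madd_gflops     = sps * shader_mhz * 2 / 1000   # MADD only                 -> 512 GFLOPS
madd_mul_gflops = sps * shader_mhz * 3 / 1000   # counting the co-issued MUL -> 768 GFLOPS

bus_bits, gddr5_gbps = 256, 3.6      # assumed bus width and per-pin effective data rate
bandwidth_gbs = bus_bits / 8 * gddr5_gbps        # -> 115.2 GB/s

print(madd_gflops, madd_mul_gflops, bandwidth_gbs)
```

The bandwidth would be roughly double what the GDDR3 G92 boards get, while the ALU side is the same G92 arithmetic at a higher clock.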
 
All AMD cards currently participating in the beta test average about 70 GFLOPS. As the GPU client is still in beta testing, the current small work units don't load the GPUs to the max. So ~100 GFLOPS is about as much as an AMD 3870 would produce, IMHO.

Another really interesting part will be comparing CUDA to CAL - GPU folders are complaining that CAL itself loads the CPU so much that a sub-3 GHz CPU can't keep up with the dataflow and becomes a bottleneck by itself. Part of the problem is, again, the small nature of the current computations. AMD has also acknowledged they are working on minimising the CPU overhead of CAL.

On a sidenote, AMD has had F@H running on their cards for over a year. Despite that, they didn't really advertise the fact. Maybe they did bring it up when working directly with clients, but mainstream tech sites didn't even mention it in their coverage of the latest launches from AMD. It seems the PR department of NVIDIA has yet again proved its worth.

100 GFLOPS @ 3870? I recall the Stanford guys saying they get close to the theoretical max out of the HD Radeons, which should translate to close to 500 GFLOPS, no?

edit:
The graph is clearly somehow f#cked up anyway: based on their client stats, PS3s are getting less than half of what ALL Radeons (X1k & HD) average per GPU, yet that graph says the HD 3870 is getting less than double the performance of the PS3.
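For what it's worth, the ~500 GFLOPS recollection is the RV670 theoretical peak, which is easy to reconstruct; comparing it to the ~70 GFLOPS beta average quoted above gives a feel for how far from "theoretical max" the current work units actually run:

```python
# HD 3870 theoretical peak: 320 stream processors, 775 MHz core clock,
# 2 FLOPs (one MADD) per SP per clock.
sps, core_ghz, flops_per_clock = 320, 0.775, 2
peak = sps * core_ghz * flops_per_clock
print(peak)          # ~496 GFLOPS theoretical peak

# Versus the ~70 GFLOPS average reported for the F@H beta above:
print(70 / peak)     # ~0.14, i.e. roughly 14% utilisation
```

That gap fits the earlier point about small work units not loading the GPU fully rather than the "close to theoretical max" recollection.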
 