NVIDIA GT200 Rumours & Speculation Thread

http://www.tgdaily.com/content/view/37611/140/

During a demonstration at Nvidia’s headquarters in Santa Clara, we got a glimpse of Adobe’s "Creative Suite Next" (or CS4), code-named “Stonehenge”, which adds GPU and physics support to its existing multi-core support.

So, what can you do with general-purpose GPU (GPGPU) acceleration in Photoshop? We saw the presenter playing with a 2 GB, 442 megapixel image like it was a 5 megapixel image on an 8-core Skulltrail system. Changes made through image zoom and through a new rotate canvas tool were applied almost instantly. Another impressive feature was the import of a 3D model into Photoshop, adding text and paint to a 3D surface and having that surface rendered directly with the 3D model's reflection map.

There was also a quick demo of a Photoshop 3D accelerated panorama, which is one of the most time-consuming tasks within Photoshop these days. The usability provided through the acceleration capabilities is enormous and we are sure that digital artists will appreciate the ability to work inside a spherical image and fix any artifacts on-the-fly.
 
The graph is clearly somehow f#cked up anyway: based on the client stats, each PS3 puts out less than half of what the Radeons (X1k & HD) average per GPU, yet that graph shows the HD3870 getting less than double the performance of a PS3.
It looks like the graph reflects the points assigned by FAH, rather than sheer FLOP numbers.

edit: Ignore, this post is now redundant. :oops:
 
CJ seems to confirm the 930 GFlops rumours we heard from China.
As I don't believe in a shader clock as low as 1.3 GHz, I believe more in a 1.9375 GHz shader clock.
This translates (assuming the 1:2.5 core/shader clock ratio of most G92 cards) into a suspiciously nice and rounded number: a 775 MHz core clock.

In short, I'm betting on 775/1938 clocks for the GTX 280.
For the memory I would bet on 2200 MHz.

I don't think these are unbelievable; in fact there is a G92 product shipping at these clocks: the EVGA e-GeForce 9800 GTX SSC.

For the GTX 260 we don't have rumours, but I would bet on clocks near or exactly the 9800 GTX's 675/1680.

Anyone else want to bet on clock speeds?
And CJ, thanks for the confirmation on Gflops, and would you like to bet? :LOL:

So where are you pulling these numbers from? And how are you doing the math on the GFLOPS? If they're going with 3 flops/shader, then 240 shaders * 1300 MHz * 3 flops/shader works out to 936 GFLOPS, right in the ~900-950 GFLOPS range CJ suggested.
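Spelling that out as a quick sanity check (plain SPs × clock × flops/SP; every figure here is a rumour, nothing official):

# Back-of-the-envelope GFLOPS check (assumes the simple SPs * clock * flops/SP
# model; the 240 SP, 1300 MHz and 930 GFLOPS figures are all rumours).
def gflops(sps, shader_mhz, flops_per_sp):
    return sps * shader_mhz * flops_per_sp / 1000.0

print(gflops(240, 1300, 3))   # 936.0  -> 3 flops/SP at 1300 MHz
print(gflops(240, 1938, 2))   # 930.24 -> 2 flops/SP at ~1938 MHz

Both readings land right around the ~930 GFLOPS rumour, so the FLOPS number alone doesn't settle it.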
 
juan, where did you pull your numbers from, and why are you "betting" on them?

In other words, is this simply what you believe, or something more concrete?

I don't think it makes sense to predict clock speeds based on a gut feeling :D

I just came from editor's day, and broke my NDA, that's where i got those clocks...... :LOL:

Now seriously, this is just what I find most reasonable based on the information we have (240 SPs, 930 GFLOPS).
There aren't many possibilities for the clock speeds; in fact it now only depends on the architecture, i.e. flops/SP.
I expect clocks at least equal to G92 products, which rules out 3 flops/shader.
What makes me believe I'm right is that Nvidia usually clocks the core at a round number (a multiple of 25), and 2 flops/SP with the G92 core/shader clock ratio gives exactly 775. A coincidence?
With 3 flops/SP it gives roughly 517, not a nice number.
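To spell out the little calculation behind those numbers (assuming the 930 GFLOPS figure, 240 SPs and the G92-style 1:2.5 core/shader clock ratio; the exact MHz shifts a few points depending on the precise GFLOPS number used):

# Solve for the clocks implied by ~930 GFLOPS and 240 SPs, for 2 vs 3 flops/SP
# (assumes a G92-style 1:2.5 core-to-shader clock ratio; all figures are rumours).
TARGET_GFLOPS = 930.0
SPS = 240
CORE_TO_SHADER = 2.5

for flops_per_sp in (2, 3):
    shader_mhz = TARGET_GFLOPS * 1000.0 / (SPS * flops_per_sp)
    core_mhz = shader_mhz / CORE_TO_SHADER
    print(f"{flops_per_sp} flops/SP: shader ~{shader_mhz:.0f} MHz, core ~{core_mhz:.0f} MHz")

# 2 flops/SP: shader ~1938 MHz, core ~775 MHz (a nice multiple of 25)
# 3 flops/SP: shader ~1292 MHz, core ~517 MHz (not a nice number)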


That doesn't mesh with the rumoured +50% per-clock improvement though... it's much more likely that those 930 GFLOPS are based on 3 flops/shader @ 1300 MHz. Either way it sounds like it's going to kick ass all over the place.

I think these rumours were based on a misinterpretation of information.
Please read the final slides, if you haven't already:
http://forum.donanimhaber.com/m_23372076/tm.htm
I think the +50% improvement is not per clock per shader; it's the total versus a 9800 GX2.

And why would Nvidia make more efficient shaders but clock them a lot lower? As I showed, the same GFLOPS are attainable with the 'old' G80-style shaders at G92 clock speeds.

Either way, it will rock...
 
Well, it's hard to say whether they would indeed clock it higher than G92, simply because G92 is a different architecture. The reason G92 was clocked higher than G80 was that the die shrink allowed higher clocks while maintaining a manageable TDP and all that.

GT200 is a bigger beast and is relatively different architecture-wise, and as G80 showed, initial clocks can be pretty low (relative to what G92 did). In fact, rev A2 G80s (the initial GTSes and all GTXes) didn't clock much higher than ~620 MHz core, whereas A3-revision G80s (the later GTS 640/320 and the Ultras) could reach 690 MHz core with decent airflow. With the increased capacitors, heat, power usage, etc., Nvidia might go conservative with clocks on these early revisions.
 
Oh and you are absolutely right. The rumored +50% per clock improvement is indeed a misinterpretation:

The direct quote is "2nd gen. unified shader architecture delivers 50% more performance over 1st generation through 240 SPs."

All it says is that there is 50% more performance over the 1st-generation shader architecture. That doesn't immediately mean 50% more performance shader-for-shader at the same clocks; it could just as well mean 50% more performance over the previous generation, period. We don't know for a fact whether it is indeed 50% more per shader. I'm sure they made optimizations to the shader architecture and it may run more efficiently, but that doesn't mean those 240 shaders are in fact 50% faster per shader than the previous gen.
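To put rough numbers on how different those two readings are (assuming performance simply scales with SP count times per-SP throughput, which is obviously a crude simplification):

# Crude comparison of the two readings of the '50% more performance' line
# (assumes performance scales with SP count times per-SP throughput;
#  purely illustrative, not a benchmark).
G80_SPS, GT200_SPS = 128, 240

# Reading 1: '+50% per shader' -> implied total speed-up over a single G80 GTX:
print(1.5 * GT200_SPS / G80_SPS)   # 2.8125, i.e. nearly 3x a G80 GTX

# Reading 2: '+50% over the previous generation' (the 9800 GX2) is just a
# statement about the whole card and implies nothing about per-shader speed.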
 
Jeez, it hasn't even been a day and the Folding at Home shots are already all over the web.
 
Jeez, it hasn't even been a day and the Folding at Home shots are already all over the web.

Well, your forum is even more popular now than it ever was before =P
-- what is wrong with that? You are getting the respect and attention you deserve imo! You might also get a few new members, but that is not my fault!

And one more thing I will say: it really appears that the GTX 280 is going to be a monster! I still say 'new architecture', but it isn't long now and we will all know for sure. Nvidia clearly intends these leaks; that says confidence to me, and I am dying to get my hands on CUDA and a pair of the new GTXes [when I can afford them]. Damn, and I need a new display. My vacation will include NVISION in San Jose this year, for all 3 days, and I am dying to see it; I'm also looking for a small 25x16 for myself.

It appears that double the G80 GTX's performance might be similar to the claimed +50% improvement over the GX2; as usual, that is probably a best case.
 
Such low clocks and a TDP of ~250 W seem a bit strange. :???:

Or do the efficiency improvements increase consumption that much?
I would rather bet that the near-1-TFLOPS figure refers to MADD FLOPs, which would match the performance rumours.
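Roughly what a MADD-only reading would imply (assuming 240 SPs and a MADD counted as 2 flops per cycle; this is pure speculation on my part):

# What a MADD-only reading of the ~1 TFLOPS figure would imply
# (assumes 240 SPs and MADD = 2 flops/cycle; speculative numbers only).
SPS = 240

# At the rumoured 1300 MHz shader clock, MADD-only throughput is much lower:
print(SPS * 2 * 1300 / 1000.0)   # 624.0 GFLOPS of MADDs

# To reach ~1 TFLOPS of MADDs alone, the shader clock would need to be near 2 GHz:
print(1_000_000 / (SPS * 2))     # ~2083 MHz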
 
Such low clocks and a TDP of ~250 W seem a bit strange. :???:

Or do the efficiency improvements increase consumption that much?
I would rather bet that the near-1-TFLOPS figure refers to MADD FLOPs, which would match the performance rumours.

Yeah, so how could it achieve such a great result in 3DMark Vantage (7k in Extreme mode!) and run Crysis playably at 1920x1200 with AA/AF enabled, with only a 1.3 GHz shader clock? There MUST be some major improvement in shader efficiency.
 
If my sources are correct, it looks like Arun was spot on about that 50% improved efficiency.

Looks like NV found what they were missing. ;)
 
If my sources are correct, it looks like Arun was spot on about that 50% improved efficiency.

Looks like NV found what they were missing. ;)



Could you tell us the potential performance increase between one configuration with the 'missing MUL' disabled and another with the extra MUL enabled?


G70 to NV40 again? :oops:
 
At this stage I'm more wondering what the price will be on the GTX 280 and 260, as we are getting a good grasp on performance potential. We also already have estimates on the ATI 48xx prices. My guesstimate is somewhere around the low 500 USD range for the 280, although it could be higher given the size of the chip and its complexity.

I can't imagine it going over 600 though if it's going to be competitive or realistically priced. The 9800 GX2 was a bit expensive for what it gave; 600 bucks would break that price ceiling imo (unless this card really is twice as fast as a 300-buck card :p).
 