NVIDIA GT200 Rumours & Speculation Thread

I suppose that GT200 actually has a 16x16 configuration at least, and both GTX280 and GTX260 are disabled versions of GT200.

Why? I don't see any reason for doing this. In my opinion, Nvidia will launch a GTX290 at 55nm with higher clocks in Q4. If the GTX280 had only 15 of its 16 clusters enabled, it couldn't have 80 TF and 40 TA units...
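Quick back-of-the-envelope math on that, assuming the texture units are spread evenly across the clusters (the per-cluster split is just an assumption, not a confirmed spec); a small Python sketch:

# If the rumoured full-chip totals (80 TF / 40 TA) are split evenly across
# 16 clusters, disabling one cluster scales the totals down proportionally.
TOTAL_TF, TOTAL_TA = 80, 40   # rumoured texture filter / address units
FULL_CLUSTERS = 16            # speculated full configuration

for enabled in (16, 15):
    tf = TOTAL_TF * enabled / FULL_CLUSTERS
    ta = TOTAL_TA * enabled / FULL_CLUSTERS
    print(f"{enabled} clusters -> {tf:g} TF, {ta:g} TA")
# 16 clusters -> 80 TF, 40 TA
# 15 clusters -> 75 TF, 37.5 TA, i.e. a 15/16 part can't hit the full figures.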

From what I've seen here, this is my understanding of rough price points and performance, in order from lowest in performance to highest:

These are disregarding any 512MB/1GB variations... just the chip itself.

1) ATI HD4850 at ~$250
2) ATI HD4870 at ~$350
3) Nvidia GTX260 at ~$450
4) Nvidia GTX280 at ~$550+
5) ATI HD4850x2 at ~$500
6) ATI HD4870x2 at ~$550

7) Nvidia GTX260x2 ??? at ~$550+ (assuming 65nm, if possible)
8) Nvidia GTX280x2 ??? at ~$600+ (assuming 65nm, if possible)

We see Nvidia in the lead for the single cards, but this time (unlike with the 38xx series) the x2 takes a significant lead over Nvidia's single offerings.

What's particularly interesting is the CrossfireX ability of the new 48xx series. We'll be seeing true memory sharing and chip integration, which means an actual x2 card will look as if it's just a single GPU. CrossfireX will still show 2, I believe, but the memory may still be shared? AND, it's cheaper than the GTX280.

Make adjustments to my assumptions, as they are based on incomplete reports from around the web.

Even at 55nm it will be very hard to put two GT200s on the same card: an enormous TDP!
If they ever release a GT200-based multi-GPU card, they will wait for 45nm...
 
I doubt they sat on it; they were probably fixing it ... and making changes while fixing it would take a lot of balls.

Another possibility is that they've been working for the last 6 months on porting PhysX to CUDA. Perhaps it was going to be delayed while they were fixing it, and with the PhysX acquisition they decided to delay it just a little bit longer, especially with G8x/G9x inventory to clear and no real competition.

Nvidia may see CUDA as a "game changer": they can create a new platform around their GPU architecture and get devs on board. With no DirectGPGPU or DirectPhysics API in sight, there is a real hole for someone to fill with a "Glide for Physics".
 
Yes, I felt the 9800GX2 was created more to counter R680 than anything else, sort of a "hey, we can do it too" statement, as well as to make sure they kept the crown across the board, since there are certain games where the 3870X2 benched better than any single-GPU config at the time of its release.

This is the best line of reasoning I have seen here yet. Nvidia wants the high end and clearly GT200 will give it to them in spades. Also, a GTx2 is almost impossible now, and likely unnecessary - but to say we will not see it with a refresh is irresponsible, imo. I think Nvidia must counter AMD's continually improving CrossfireX, which will certainly scale to X4 - eventually! Then we will see a GTx2; not now!

Also, it appears that Nvidia thinks yields are going to be great and will use this Tesla architecture as the basis for their GPUs for the next 2 years. They know how to make yields work for them with their lesser chips, and I think we will see G92 variants phased out quickly.

Of course, CUDA is their game changer and Nvidia's not-so-secret weapon.

EDIT:

One of the guys from HardOCP claims to have emailed Charlie about who wrote this: http://www.theinquirer.net/gb/inquirer/news/2008/05/24/gtx260-280-revealed

Evidently Charlie responded and seems to have seen or heard numbers from both sides; to quote:
"Two 770s on a card will kill the 280, and NV can't do 2 x 280s for power reasons, and would be hard pressed to do 2 x 260s for the same reason."

Finally, this IS worth looking at:

http://www.tgdaily.com/content/view/37611/140/

Photoshop to get GPU and physics acceleration

During a demonstration at Nvidia’s headquarters in Santa Clara, we got a glimpse of Adobe’s "Creative Suite Next" (or CS4), code-named “Stonehenge”, which adds GPU and physics support to its existing multi-core support.

So, what can you do with general-purpose GPU (GPGPU) acceleration in Photoshop? We saw the presenter playing with a 2 GB, 442 megapixel image like it was a 5 megapixel image on an 8-core Skulltrail system. Changes made through image zoom and through a new rotate canvas tool were applied almost instantly. Another impressive feature was the import of a 3D model into Photoshop, adding text and paint on a 3D surface and having that surface directly rendered with the 3D models' reflection map.
So CUDA bites Intel's butt, to be crude. Let's see Larrabee match it; it looks like CUDA is, what, 10x faster?
:oops:
 
Of course, CUDA is their game changer and Nvidia's not-so-secret weapon.

EDIT:

One of the guys from HardOCP claims to have emailed Charlie about who wrote this: http://www.theinquirer.net/gb/inquirer/news/2008/05/24/gtx260-280-revealed

Evidently Charlie responded and seems to have seen or heard numbers from both sides; to quote:
"Two 770s on a card will kill the 280, and NV can't do 2 x 280s for power reasons, and would be hard pressed to do 2 x 260s for the same reason."

Interesting, but the guys from PCInlife seem to have info from a reliable source and have said even 2x RV770 won't be able to beat GT200. So who is right? :)

Moreover, if it is true that RV770 performance is about 20-30% higher than GF9800GTX performance, and GT200 is said to score 7k in 3DMark Vantage Extreme mode, then there is no possibility that even a dual RV770 will beat GT200.
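For what it's worth, here's a minimal sketch of that comparison in Python. Only the 20-30% uplift and the rumoured 7k figure come from the posts above; the 9800GTX baseline score and the CrossFire scaling factor are placeholder assumptions, so treat the output as illustrative only.

# Rough sanity check: can a dual RV770 plausibly reach the rumoured GT200 score?
def dual_rv770_estimate(gtx9800_score, uplift=0.3, xfire_scaling=0.8):
    """Estimated 3DMark Vantage Extreme score for two RV770s."""
    single_rv770 = gtx9800_score * (1.0 + uplift)   # 20-30% over a 9800GTX
    return single_rv770 * (1.0 + xfire_scaling)     # second GPU adds ~80% (assumed)

GT200_RUMOURED = 7000
for baseline in (2000, 2500, 3000):  # hypothetical 9800GTX Extreme scores
    est = dual_rv770_estimate(baseline)
    print(f"9800GTX={baseline} -> dual RV770 ~{est:.0f} vs GT200 ~{GT200_RUMOURED}")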
 
Charlie also wrote how R600 was going to be the "preeminent enthusiast GPU of 2007" and lamented how much trouble Intel was going to be in without access to that wonder-chip.

Charlie doesn't know ****.
 
...

Finally, this IS worth looking at:

http://www.tgdaily.com/content/view/37611/140/


So CUDA bites Intel's butt, to be crude. Let's see Larrabee match it; it looks like CUDA is, what, 10x faster?
:oops:

"Oct. 1" (aka, "Just make something up")
Gizmodo is repeating info found on a site called TG Daily, stating that "Photoshop CS4" (a term that I've never heard anyone from Adobe use publicly) "is expected to be released on October 1." Uhh... expected by whom? And based on what?

I didn't say anything about schedule. In fact, I never said that any of this stuff is promised to go into any particular version of Photoshop. Rather, as with previous installments, it's a technology demonstration of some things we've got cooking--nothing more.

Doesn't matter, though: Someone pulled a date apparently out of thin air, and now everyone who can copy & paste is dutifully repeating it. The fish story grows with the telling, too. In addition to repeating the date, Electronista is inventing new details (e.g. "CS3 has already had limited support for graphics processing units (GPUs) for certain filters"; sorry, no; "An upcoming wave of video cards with special physics processing will also help, Adobe explains"; nope, didn't say that; and more). Where do people get this stuff? It's particularly annoying to see made-up info presented as a response from Adobe--to questions that were never asked. (Contacting Adobe PR, or me directly, to confirm some detail isn't exactly tough.)


Technology sneak: Photoshop, AE, Flash
I show off some new performance tuning in Photoshop by playing with a 650 megapixel image on a Mac Pro. It's too bad that the low frame rate of recording hides the fluidity of panning, zooming, and rotating via OpenGL hardware acceleration. I also demonstrate automated merging of images to extend depth of field, as well as a 360-degree panorama mapped onto an interactive 3D sphere on which I can paint directly. (Painting directly onto 3D models--mmm, yes.) Steve demos Adobe's new "Thermo" RIA design tool while Karl shows off inverse kinematics in Flash and more.

Pixel Bender now showing in Flash Player

Earlier today I found myself over at NVIDIA, demoing some of the new OpenGL-accelerated Photoshop technology we've got cooking in the labs. The latest GPUs are just crazy-fast, and it's a great pleasure to see a 2-gigabyte, 442-Megapixel Photoshop file gliding around like buttah*.
 
his contempt for NDAs
NDA'd early information and review samples are used to put websites on a leash and it works. There is nothing nefarious about why manufacturers do it, but it's not in the best interest of consumers. We need a couple of websites with contempt for NDAs they didn't sign to keep things fresh (and a little more honest).
 
NDA'd early information and review samples are used to put websites on a leash and it works. There is nothing nefarious about why manufacturers do it, but it's not in the best interest of consumers. We need a couple of websites with contempt for NDAs they didn't sign to keep things fresh (and a little more honest).

A "couple" ?
Don't we have enough obscure Chinese hardware news and rumor mill sites already ? ;)
 
Interesting, but the guys from PCInlife seem to have info from a reliable source and have said even 2x RV770 won't be able to beat GT200. So who is right? :)

Moreover, if it is true that RV770 performance is about 20-30% higher than GF9800GTX performance, and GT200 is said to score 7k in 3DMark Vantage Extreme mode, then there is no possibility that even a dual RV770 will beat GT200.


The main reason those Chinese websites can deliver such reliable information is that production of high-end graphics boards is located in China. RV770's advantage may not be viable if the die size of G92b turns out to be about the same as that of RV770.
 
AFAIK nobody has had reliable proof that they've tested RV770 yet, and there haven't even been blurred pics of the card, so it's still wait-and-see for me.
 


See! You have to realize that this is what I love about you guys! I can now post normal things from the 'net, and then get the real inside view - the "why" - without you really breaking your own NDAs [I hope]. My function would now be that of a reporter: hopefully to ask the right questions and to get a more complete picture. Thank you for sharing! I do have a better picture now.

NDA'd early information and review samples are used to put websites on a leash and it works. There is nothing nefarious about why manufacturers do it, but it's not in the best interest of consumers. We need a couple of websites with contempt for NDAs they didn't sign to keep things fresh (and a little more honest).

I think you misunderstand me. I realize the reasons for NDAs and I abide by them myself as much as possible. However, I was pointing out that Charlie was always contemptuous of them and would post info on his site - long ago - that raised the ire of Nvidia, and I also believe that, years ago, they started feeding him false info just to discredit him. The war escalated until Charlie became lopsided on his site: all AMD. And they don't treat him so well either.

The moral of this story [imo] is: don't piss Nvidia [or anyone] off if you don't want to look bad later. Charlie has no in with Nvidia because he disrespected them first by breaking an NDA [or so I remember; perhaps I'm confusing him with someone else].
 
Interesting, but the guys from PCInlife seem to have info from a reliable source and have said even 2x RV770 won't be able to beat GT200. So who is right? :)

Moreover, if it is true that RV770 performance is about 20-30% higher than GF9800GTX performance, and GT200 is said to score 7k in 3DMark Vantage Extreme mode, then there is no possibility that even a dual RV770 will beat GT200.


I don't know, we'll have to wait, but going by the specs I think R700 / dual RV770 (the X2) will beat GT200 somewhat, while a GT200 Ultra will make things even or swing it in favor of Nvidia.

I don't think RV770 or R700 will be an AMD/ATI miracle, but I expect a solid GPU that builds on the lessons of R600.
 
Why? I don't see any reason for doing this. In my opinion, Nvidia will launch a GTX290 at 55nm with higher clocks in Q4. If the GTX280 had only 15 of its 16 clusters enabled, it couldn't have 80 TF and 40 TA units...



Even at 55nm it will be very hard to put two GT200s on the same card: an enormous TDP!
If they ever release a GT200-based multi-GPU card, they will wait for 45nm...


I agree now, and I think GT200-GX2 will be the 2009 refresh.

I'd expect each GPU to be on 45nm and more like a 260 rather than a 280 though.

384 SP - 2x 448-bit bus - 48-54 ROPs.

Or they could switch to 2x 256-bit bus and go with GDDR5.
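Quick bandwidth math on those two options, per GPU; the memory data rates here are just placeholder assumptions on my part, not known specs:

# Peak memory bandwidth in GB/s = bus width in bytes * effective data rate (GT/s).
def bandwidth_gbps(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

# 448-bit GDDR3, assuming ~2.2 GT/s effective (1100 MHz DDR)
print(bandwidth_gbps(448, 2.2))   # ~123 GB/s
# 256-bit GDDR5, assuming ~3.6 GT/s effective (900 MHz base, quad data rate)
print(bandwidth_gbps(256, 3.6))   # ~115 GB/s

So the narrower GDDR5 bus ends up in roughly the same ballpark, under those assumed clocks.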

This is how I see things being played out

mid 08: GT200
Q4 08 / Q1 09: GT200 Ultra
mid-late 09: - GT200 GX2 type card
Q1 or Q2 2010: next-gen all new Nvidia DX11 architecture.
 
They could always make the next GT200 Ultra just a new revision. After all, the G80 GTS and GTX initially launched as the A2 revision, but the 8800 Ultra and later G80 GTSes were A3 revisions of the G80 core (and could clock to the upper 600s on the core on air pretty reliably, whereas A2s were limited to around 630).
 
They could always make the next GT200 Ultra just a new revision. After all, the G80 GTS and GTX initially launched as the A2 revision, but the 8800 Ultra and later G80 GTSes were A3 revisions of the G80 core (and could clock to the upper 600s on the core on air pretty reliably, whereas A2s were limited to around 630).


That's what I meant, GT200 Ultra just being a silicon revision (not a new GPU) like the A2 to A3 revision of G80.

A GT200 GX2, if there is one in 2009, would have to be a reworked GPU like G92 was from G80.
 
Not really... :LOL:

Built-in PPU hardware, and the misunderstood "memory 999MHz, 896MHz GDDR3" from the INQ being interpreted as a 999MHz shader clock and 896MHz memory...
 
I agree now, and I think GT200-GX2 will be the 2009 refresh.

I'd expect each GPU to be on 45nm and more like a 260 rather than a 280 though.

384 SP - 2x 448-bit bus - 48-54 ROPs.

Or they could switch to 2x 256-bit bus and go with GDDR5.

This is how I see things being played out

mid 08: GT200
Q4 08 / Q1 09: GT200 Ultra
mid-late 09: - GT200 GX2 type card
Q1 or Q2 2010: next-gen all new Nvidia DX11 architecture.

Hold on! .. this idea is becoming mainstream? You guys were laughing at me - last week - when I suggested a GTx2 with the refresh, that Tesla would be around for 2 years, and a next gen to coincide with DX11 in 2010!
:oops:
 
I'm still laughing at you. Except now I'm laughing at Megadrive too! :p (no offense intended... :))

Anyway, 384 SPs/96 TMUs/32 ROPs/256-bit GDDR5 would seem like a reasonable config on 40nm (heck, it'd even be reasonable on 55nm!); however, I suspect that'd be too much for a GX2, but not really the maximum the process could sustain either. That'd be relatively logical if they'd expect their DX11 chip to come 6 months later though. We'll see how it goes...
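For reference, rough peak numbers for that config; all the clock figures below are just guesses, not anything reported:

# Theoretical throughput for a 384 SP / 96 TMU / 32 ROP / 256-bit GDDR5 part.
SHADER_CLK_GHZ = 1.5   # assumed shader clock
CORE_CLK_GHZ   = 0.65  # assumed core clock
MEM_RATE_GTPS  = 4.0   # assumed GDDR5 effective data rate

gflops    = 384 * 3 * SHADER_CLK_GHZ   # counting MADD + MUL per SP per clock
gtexels   = 96 * CORE_CLK_GHZ          # bilinear texels per second (Gtexels/s)
bandwidth = 256 / 8 * MEM_RATE_GTPS    # GB/s

print(f"~{gflops:.0f} GFLOPS, ~{gtexels:.0f} GTex/s, ~{bandwidth:.0f} GB/s")
# ~1728 GFLOPS, ~62 GTex/s, ~128 GB/s with those guessed clocks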
 