Beyond3D's GT200 GPU and Architecture Analysis

Arun

It's finally here: GT200 is the chip NVIDIA is pitting against its 18-month-old champion, the G80, hoping to dethrone it. We won't look into real-world performance just yet, but we've got our usual brand of architecture analysis up right away, with more coming in the next few days and weeks.

Article Link

Tentative Schedule
17th or 18th of June: Tesla/CUDA Article
20th or 23rd of June: GT200/G80 Architecture Deep-Dive
25th and 30th of June: Unrelated Computation Article, Part 1 & 2
+ Fit in RV770 somewhere in there... ;)

Review thread: http://forum.beyond3d.com/showthread.php?t=48566
 
Oh yeah, that's the stuff! Thanks guys (or just Rys? :D)!
 
The original article is all Rys', but I made a whole bunch of minor changes/improvements/corrections/etc. and insisted that several extra details be chipped in. Either way, I think we're both relatively happy with the article, so enjoy! :) (especially given the laughably small amount of direct access we had to the actual chip)
 
Yes, I am enjoying it.

Especially at the end of page 6. Pun or typo? :LOL:

Well done, all of you. :)

Edit: "Satan Clara"? Heh, on page 8. :)
 
The wafer shot on page one hints at ~90-95 candidate dice per 300mm start. That's not a lot. We'd call it the Kim Kardashian's ass of the graphics world, but that wouldn't be appropriate for such a sober laugh-free publication as ours.
Oh god, cannot control .. laughter .. anymore.
Great job.
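For the curious, here's a quick back-of-the-envelope check of that candidate-dice figure, assuming the commonly quoted ~576 mm² die size and the usual dies-per-wafer approximation (neither number is from the article itself):

// Back-of-the-envelope check (assumed figures, not from the article):
// candidate dice per 300 mm wafer for a ~576 mm^2 die, using the common
// approximation that subtracts a term for partial dice at the wafer edge.
#include <cmath>
#include <cstdio>

int main()
{
    const double pi             = 3.14159265358979;
    const double wafer_diameter = 300.0;  // mm
    const double die_area       = 576.0;  // mm^2, roughly 24 mm x 24 mm

    const double wafer_area = pi * wafer_diameter * wafer_diameter / 4.0;      // ~70,686 mm^2
    const double gross      = wafer_area / die_area;                           // ~123 dice
    const double edge_loss  = pi * wafer_diameter / std::sqrt(2.0 * die_area); // ~28 dice

    std::printf("candidate dice per wafer: ~%.0f\n", gross - edge_loss);       // ~95
    return 0;
}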
 
Bludd: Pretty sure those two are puns, while nicolasb's 'deviance' might be a mistake but I'm not completely sure. Either way, guess we'll keep it since you like it that much... ;)
Also, uhhh, what about also having an architecture discussion? :!: :)
 
Arun, yes, but I just wanted to highlight them in case they weren't.
I'll leave the arch. discussion to you guys. I have to write about fault reporting in business-critical systems instead of reading about GPUs. Argh! :D
 
Thanks, guys.

I had a read over at Anandtech, and judging from their benches, the GTX 280 is worse off than the 9800 GX2.
 
Are you guys getting double precision math out of CUDA at the moment?

PS: oops, misunderstood the numbers.
 
We'd call it the Kim Kardashian's ass of the graphics world, but that wouldn't be appropriate for such a sober laugh-free publication as ours.
Interesting to see that Tech Report used this analogy too!
Whatever the case, this chip is big—like "after the first wafer was fabbed, the tide came in two minutes late" big. It's the Kim Kardashian's butt of the GPU world.
Source
 
Full speed DP? [...]
PS: are you guys getting double precision math out of CUDA at the moment?
No, no, it's 1/8th speed DP. Is there any place where that's not clear? What's full-speed are denormals (still trying to get a few more details about potential catches there, if any). As I said, our access to the chip was... very limited though, so we haven't had the opportunity to test any of that, at least not yet. As for that analogy, it's not like Rys and Scott never talked... ;)
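For anyone wanting to try it themselves, here's a minimal sketch of what double precision from CUDA looks like on GT200-class hardware (the kernel and build line are illustrative, not from the article; GT200 is compute capability 1.3, and its single FP64 unit per SM is what gives the ~1/8th rate mentioned above):

// Illustrative sketch: double-precision (DAXPY-style) kernel in CUDA.
// GT200 is the first NVIDIA GPU to expose FP64 (compute capability 1.3),
// so this needs to be built with something like: nvcc -arch=sm_13 daxpy.cu
// (older targets demote doubles to floats at compile time, with a warning).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double* x, double* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];  // executes on the single FP64 unit per SM,
                                 // hence roughly 1/8th the FP32 rate
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    if (prop.major == 1 && prop.minor < 3) {
        std::printf("This device has no double-precision support.\n");
        return 0;
    }

    const int n = 1 << 20;
    double *x = 0, *y = 0;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    // ... fill x and y with real data before launching ...

    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaThreadSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}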
 
I'm not a very technical person, but what specifically is preventing FP32 operations from being performed on the FP64 units simultaneously with the rest of the FP32 units? Also, you briefly mention that the ROPs are what prevent D3D 10.1 support; what is it that causes that limitation?
 
Just got done reading the review at Anandtech.

Given that we know the 9800 GX2 is faster than the GTX 280 overall, one questions why Nvidia hyped the GT200 series to high heaven and then slapped a monster price tag on top of it.
 
I really, really miss Beyond3D reviews. Is there any chance of them coming back? I even imagine DaveB getting canned during those layoff rounds. I shouldn't feel this way but can't help it sometimes. :(
 
Why does the TechReport say this?

"Although the GT200 sticks with tried-and-true GDDR3 memory, it's capable of supporting GDDR4 memory types, as well—not that it may ever be necessary. The GTX 280's whopping 142 GB/s of bandwidth outdoes anything we've seen to date, even the dual-GPU cards."

Yet your own article mentions it only supports GDDR3
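As an aside, the 142 GB/s figure in that quote falls out of the launch GTX 280 memory specs, assuming a 512-bit bus of GDDR3 at 1107 MHz (2214 MT/s effective):

// Quick check of the quoted bandwidth figure (assumed launch GTX 280 specs:
// 512-bit bus, 1107 MHz GDDR3, i.e. 2214 MT/s effective).
#include <cstdio>

int main()
{
    const double bus_width_bits = 512.0;
    const double transfer_rate  = 2214e6;  // transfers per second (DDR)

    const double bytes_per_second = (bus_width_bits / 8.0) * transfer_rate;
    std::printf("%.1f GB/s\n", bytes_per_second / 1e9);  // ~141.7 GB/s
    return 0;
}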
 