Beyond3D's GT200 GPU and Architecture Analysis

Except I would hardly call the GTX 280 a "misstep".

Agreed, it isn't one. It just didn't progress enough over the timeframe given; at least, that's what the current numbers show.

But granted, I'm already disappointed with Nehalem's single-threaded improvements :D so I may just be too nit-picky.


(The Conroe quote was on an R600->RV770 basis.)
 
G200 seems more robust and consistent than its predecessors.

There would have to be some point in time where the gap between sustained and peak performance got ridiculous, particularly in the face of more complex workloads.

G200's not being a grand slam might have more to do with just how successful G80 was in gathering up a whole lot of low-hanging performance fruit.
Consistency is nothing to sneeze at, if G200 can manage it.
Not as good as a GX2? Perhaps giving up 10% peak to avoid a rollercoaster framerate and the wait for SLI profiles is worthwhile. If shader complexity keeps going up at the rate it has, G80 was going to run out of gas eventually.
A lot of the improvement looks like it's devoted to making a design with less of a glass jaw in various areas.

RV770, for what little is truly known about it, is improving over a design with at least one significant glass jaw.

Conroe versus Prescott is something along the same lines.

I'm curious if a more thorough evaluation of G200 and RV770 will show where both designs match up on performance and flexibility.


The big low-hanging fruit I see isn't peak performance or raw FLOPs, but setup and the overhead in switching contexts between graphics mode and CUDA.
Context management and switching still looks like a sore point.

R600's lineage in some ways has elements that could lead to much better context handling, but CAL so far has high CPU overhead as well, so that's no guarantee.
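
To put a rough number on that kind of overhead, here is a minimal sketch; it is purely illustrative, assumes a CUDA-capable card and toolkit, and `noop` is just a made-up empty kernel. It uses CUDA events to time back-to-back launches, where essentially all of the elapsed time is driver and dispatch overhead rather than GPU work:

```cpp
// Minimal sketch: estimate per-launch driver overhead with CUDA events.
// Assumes a CUDA-capable GPU and toolkit; "noop" is a made-up kernel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void noop() {}

int main() {
    cudaFree(0);  // force context creation up front (itself a slow step)

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    noop<<<1, 1>>>();            // warm up: module load, first launch
    cudaDeviceSynchronize();

    const int iters = 1000;
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        noop<<<1, 1>>>();        // steady-state launches, no real GPU work
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("avg launch overhead: %.1f us\n", 1000.0f * ms / iters);
    return 0;
}
```

An analogous test that alternated Direct3D rendering with CUDA kernels would expose the graphics-to-compute context switch itself, but that needs an interop setup too long to sketch here.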
 
AMD doesn't seem to think so based on its product positioning.

Because priced any higher, it wouldn't work well either.

With the GTX 260 entrenched at $400, the 4850s have to be $200 each so that a CrossFire pair, at the same $400 total, is a really desirable alternative.

Same goes for the 4870 and GTX 280, I guess.
 
Anyone who thinks GT200 / GTX 280 is another NV30 / GeForce FX disaster (not that anyone here does) is wrong, IMO. I just don't see it.

GeForce FX was an entirely new generation of GPU compared to GF3/GF4. GT200 seems halfway between what used to be called a refresh and a new generation: it doesn't go beyond G80 as far as the DX10 / D3D10 and Shader Model 4 generation or features are concerned. However, unlike many refresh GPUs (TNT2, GF2 GTS, GF4 Ti, NV35, G70) that only increase resources modestly, GT200 adds a lot more resources to what G80 already had. GT200 is roughly doubled in complexity, with about 1.4 billion transistors to G80's 681 million, something that usually only happens with a generational leap (e.g. GF3, GFFX, NV40, G80), and on top of that it tweaks the G80-based units to perform better.

Does that sound right?

I hope that in time GT200 clocks can go up significantly and drivers improve, since a GDDR5 version/revision is unlikely anytime soon.

I also disagree with the person (maybe in the other thread) who thought a 55nm GT200b can't be as close as three months away; I'm sure it is close. No doubt it'll arrive this fall. Nvidia will probably then have a spring refresh of GT200, perhaps an Ultra, that is, if an Ultra version of GT200b doesn't happen first.

Q: can GT200 be shrunk down to 45nm without losing its 512-bit bus?

A potential 45nm "GT200c" for spring/summer 2009 could be what Nvidia needs to bridge the gap between GT200 and the next-gen DX11 GPU that's due in late 2009 or 2010.
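
On the 512-bit question, the constraint people usually point to is pad limiting: the memory interface fixes a minimum number of I/O pads, and a shrunk die still has to be big enough to carry them. Here's a back-of-envelope sketch; every number in it is an illustrative guess (real pad counts and pitches for GT200 aren't public), so treat it as the shape of the argument rather than an answer:

```cpp
// Back-of-envelope pad-limit check. Every number here is an
// illustrative guess, NOT a real GT200 figure; the point is that a
// fixed pad count sets a floor on die size regardless of process node.
#include <cstdio>

int main() {
    const int    dataPins      = 512;    // the 512-bit memory bus itself
    const int    otherSignals  = 600;    // guess: addr/ctrl, PCIe, misc I/O
    const double powerOverhead = 1.5;    // guess: extra pads for power/ground
    const double padPitchMm    = 0.045;  // guess: ~45 um perimeter pad pitch

    double pads      = (dataPins + otherSignals) * powerOverhead;
    double perimeter = pads * padPitchMm;   // edge length the pad ring needs
    double minSide   = perimeter / 4.0;     // assume a square die
    printf("pad-limited floor: %.1f mm per side, ~%.0f mm^2\n",
           minSide, minSide * minSide);
    return 0;
}
```

With these made-up inputs the floor lands around 350 mm², while an ideal optical shrink of GT200's ~576 mm² die from 65nm to 45nm would come in under 300 mm², which is exactly why the question is interesting: shrink far enough and the bus, not the logic, sets the die size. Flip-chip area bumps relax this somewhat, so take it as illustration only.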
 
Because priced any higher, it wouldn't work well either.

With the GTX 260 entrenched at $400, the 4850s have to be $200 each so that a CrossFire pair, at the same $400 total, is a really desirable alternative.

Same goes for the 4870 and GTX 280, I guess.

I don't know about you, but seeing brand-new GTX 260s at price points similar to the 9800 GTX already is nothing short of amazing (and a mighty foe to the upcoming HD 4870).

€284 in Germany.

Source: http://www.fudzilla.com/index.php?option=com_content&task=view&id=7956&Itemid=1
 
I want to see whether either vendor can eliminate micro-stuttering before I'd go multi-GPU again. 4850 CF may be "faster" on paper, but could still deliver a less smooth experience in real-world gameplay.
 
I want to see whether either vendor can eliminate micro-stuttering before I'd go multi-GPU again. 4850 CF may be "faster" on paper, but could still deliver a less smooth experience in real-world gameplay.
Like I said earlier, it's a YMMV issue, just like some people are sensitive to the screen-door effect or the color wheel on projectors.
 
The existence of the artifact itself doesn't matter as much as how strongly it affects a given person's perception.


For the color-wheel example, I can easily and always see the colors when I'm not facing the screen directly (even the edges of a large screen bother me), but many people don't even notice it.

Hence I can't stand DLP, while for many it's a great solution.
 
Very nice write-up.

I pulled the trigger on an eVGA board from Newegg for $660 (slightly overclocked board, 621MHz) last night. For someone running a 30" display, the performance of this single GPU is a strong upgrade over an 8800 GTX. Combine that with the fact that AMD has stated they're not currently interested in competing at this level, that I'm not a fan of multi-GPU setups, and that a GTX 280 in Age of Conan at 2560x1600 with high settings, bloom, and AA/AF will probably double my frame rate, and it was irresistible to me.
 
Consistency is nothing to sneeze at, if G200 can manage it.
Not as good as a GX2? Perhaps giving up 10% peak to avoid a rollercoaster framerate and the wait for SLI profiles is worthwhile. If shader complexity keeps going up at the rate it has, G80 was going to run out of gas eventually.

Very well said. Many, many people just don't seem to have a good grasp of this concept, or of why gaming should be much smoother on the GTX 200 series versus something like a 9800 GX2, 8800 GTX/9800 GTX in SLI, or any of the CrossFire systems.

I was pretty shocked by how consistent the framerate was on the GTX 280 and GTX 260 over a 10-minute span in various highly intensive games in the graphs provided at [H]OCP. Their unique style of reviewing is really starting to pay off for the consumer, IMO.

And I really must re-emphasize that NVIDIA marketing really dropped the ball on this one, both in not putting any emphasis on smooth and consistent gameplay and in creating such a huge price differential between two fairly similar cards.
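
The [H]OCP-style frame-time graphs make the point visually; here's the same point as arithmetic, with invented numbers. Both traces average the same FPS, but one alternates fast and slow frames the way AFR micro-stutter does:

```cpp
// Two invented one-second frame-time traces with the same average FPS.
// The AFR-style trace alternates short/long frames (micro-stutter);
// the single-GPU trace is steady. Average FPS can't tell them apart,
// but the worst frame time can.
#include <algorithm>
#include <cstdio>
#include <vector>

static void report(const char* name, const std::vector<double>& ft) {
    double total = 0.0, worst = 0.0;
    for (double t : ft) { total += t; worst = std::max(worst, t); }
    printf("%s: %.0f avg fps, worst frame %.1f ms\n",
           name, ft.size() / (total / 1000.0), worst);
}

int main() {
    std::vector<double> steady(60, 16.7);   // 60 frames at a steady ~16.7 ms
    std::vector<double> stutter;
    for (int i = 0; i < 30; ++i) {          // 60 frames alternating 8 / 25.4 ms
        stutter.push_back(8.0);
        stutter.push_back(25.4);
    }
    report("single GPU", steady);
    report("AFR pair  ", stutter);
    return 0;
}
```

Both traces print ~60 average FPS, but the alternating one spends half its time at an effective ~39 FPS cadence, which is exactly the effect a bar-chart average hides.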
 
And I really must re-emphasize that NVIDIA marketing really dropped the ball on this one, both in not putting any emphasis on smooth and consistent gameplay and in creating such a huge price differential between two fairly similar cards.

Nvidia PR press release: "SLI iz teh zukxxxorz!"

I don't think so.
 
Nvidia PR press release: "SLI iz teh zukxxxorz!"

I don't think so.

I know what you mean, and it probably contributed to the lack of response from NVIDIA marketing to reviewers' current testing procedures, but at some point they have to come to grips with the fact that multi-GPU systems tend to have very unstable framerates and often require driver updates before games scale well in multi-GPU mode. For them to say nothing and essentially pretend the issue doesn't exist, or isn't significant, is ultimately a losing strategy in my opinion, especially when your latest and greatest cards are the essence of smooth gameplay and stable framerates.

On the other hand, it appears that AMD/ATI is at least trying to fix issues such as micro-stuttering and bottlenecks in multi-GPU systems (albeit with no proof of concept yet). Even so, I believe part of the reason R700 is delayed another two months is that the time is sorely needed for ATI to work on the drivers for their multi-GPU card(s). By that time, NVIDIA may have a single-GPU solution with even higher performance than the GTX 280.
 