G70, G71, R520, R580 die size discrepancy

Xbot360 said:
More good news for Nvidia (which I hate posting): there are links to an overclocked 3DMark06 score of 6700.

They overclocked it to 705 MHz. If that's easy to do... wow.
What's so "wow" about that?
 
Well I mean, if it's readily overclockable, that's good for Nvidia, right?

Anyway, wrong thread, my bad, but you cannot edit your posts...
 
Chalnoth said:
If you're talking about supersampling in conjunction with HDR, then that is entirely up to the developer...

No, I'm talking about whether SSAA can use RG sampling or not. Mintmaster said it wasn't possible, but the NV PR I linked to earlier mentions RG without saying whether it applies to MSAA or SSAA... sorry for going off topic, but I wanted that clarified.
 
Jaws said:
No, I'm talking about whether SSAA can use RG sampling or not. Mintmaster said it wasn't possible, but the NV PR I linked to earlier mentions RG without saying whether it applies to MSAA or SSAA... sorry for going off topic, but I wanted that clarified.
There's no API interface for supersampling AA with HDR.

When you're talking about selective supersampling, which is accessed by requesting MSAA while transparency AA is enabled, then the supersampling that is performed is the same sample pattern as the MSAA, i.e., rotated grid.

But when you're attempting to do FSAA on an FP16 render target on nVidia hardware, your only recourse is to draw to a larger buffer than the final output (or to multiple separate buffers), and recombine these manually-created samples when reading that FP16 texture (or textures) back in for the tonemapping pass. So this will be ordered grid in the simplest case (a plain downsample of a larger buffer), or rotated/sparse grid if the developer so desires, by rendering to multiple jittered buffers and blending between them. The latter likely comes at a performance cost, since the geometry has to be resubmitted and the texture caches become less effective.
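To make the simplest case concrete, here is a rough CPU-side sketch of what "draw to a larger buffer, then recombine the samples in the tonemapping pass" boils down to. The buffer layout, scale factor, and Reinhard-style tonemap are illustrative choices only (a real engine would do this in a pixel shader on the FP16 texture), not anything taken from NVIDIA's documentation.

Code:
#include <cstdio>
#include <vector>

// One linear-space HDR sample, standing in for an FP16 texel.
struct HdrPixel { float r, g, b; };

// Ordered-grid "supersampling" done by hand: the scene was rendered into a
// buffer scale x larger than the output, and we box-filter it down while
// applying an (illustrative) Reinhard-style tonemap.
std::vector<HdrPixel> downsampleAndTonemap(const std::vector<HdrPixel>& hdr,
                                           int outW, int outH, int scale)
{
    std::vector<HdrPixel> out(outW * outH);
    const int srcW = outW * scale;
    const float inv = 1.0f / float(scale * scale);

    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x) {
            float r = 0.0f, g = 0.0f, b = 0.0f;
            // Average the scale x scale block of HDR samples (ordered grid).
            for (int sy = 0; sy < scale; ++sy)
                for (int sx = 0; sx < scale; ++sx) {
                    const HdrPixel& s = hdr[(y * scale + sy) * srcW + (x * scale + sx)];
                    r += s.r; g += s.g; b += s.b;
                }
            r *= inv; g *= inv; b *= inv;
            // Tonemap after the filtering, still in linear space.
            out[y * outW + x] = { r / (1.0f + r), g / (1.0f + g), b / (1.0f + b) };
        }
    return out;
}

int main()
{
    const int outW = 2, outH = 2, scale = 2;   // 2x2 ordered grid per output pixel
    std::vector<HdrPixel> hdr(outW * scale * outH * scale, HdrPixel{4.0f, 1.0f, 0.25f});
    std::vector<HdrPixel> ldr = downsampleAndTonemap(hdr, outW, outH, scale);
    std::printf("first output pixel: %.3f %.3f %.3f\n", ldr[0].r, ldr[0].g, ldr[0].b);
    return 0;
}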
 
Jaws said:
No, I'm talking about whether SSAA can use RG sampling or not. Mintmaster said it wasn't possible, but the NV PR I linked to earlier mentions RG without saying whether it applies to MSAA or SSAA... sorry for going off topic, but I wanted that clarified.
Chalnoth said it twice already, but I'll try to simplify it for you a bit. When writing a 3D game with AA, developers can either specify the multisample amount, or render to a higher resolution and shrink it themselves.

NVidia has never shown the ability to use anything but a regular grid for supersampling via their control panel, whether we are looking at their 4xS, 8xS, or 16xS AA modes. If that's not confirmation for you, then I don't know what is. 2x SLI AA may be the best thing you can do, assuming they can figure out how to support it with HDR rendering and all the off-screen buffers.
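For what it's worth, the ordered-vs-rotated difference being discussed falls straight out of the sample positions. A quick sketch (the offsets below are textbook 4x ordered-grid and 4x rotated-grid patterns, not measured from any particular chip): on a near-vertical edge, the number of grey levels you can resolve equals the number of distinct horizontal sample offsets, which is 2 for the ordered grid and 4 for the rotated grid.

Code:
#include <cstdio>
#include <set>

int main()
{
    // Sub-pixel sample offsets in [0,1) for 4x AA (illustrative textbook patterns).
    const float og[4][2] = { {0.25f, 0.25f}, {0.75f, 0.25f},
                             {0.25f, 0.75f}, {0.75f, 0.75f} };     // ordered grid
    const float rg[4][2] = { {0.375f, 0.125f}, {0.875f, 0.375f},
                             {0.125f, 0.625f}, {0.625f, 0.875f} }; // rotated/sparse grid

    // A near-vertical edge sweeps across the pixel in x, so the number of
    // achievable coverage levels is the number of distinct x offsets.
    std::set<float> ogx, rgx;
    for (int i = 0; i < 4; ++i) { ogx.insert(og[i][0]); rgx.insert(rg[i][0]); }

    std::printf("distinct x offsets: ordered grid = %zu, rotated grid = %zu\n",
                ogx.size(), rgx.size());
    return 0;
}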
 
^^ Guys, thanks for the explanations, but there seem to be some wires crossed. This is what I want clarified from what Mintmaster posted earlier:

Mintmaster said:
......
More importantly, the lack of rotated grid will make SSAA's visual improvement rather minimal (except in cases of shader antialiasing),...

Are you saying SSAA with RG is NOT possible with HDR? Because from Chalnoth's explanation it is, but it's up to the developer.
 
Jaws said:
Are you saying SSAA with RG is NOT possible with HDR? Because from Chalnoth's explanation it is, but it's up to the developer.
Any pattern is possible with supersampling if you're willing to render the scene n times with jittered geometry. That is possible on any card with render to texture. But "real hardware-supported RGSSAA" wouldn't need multiple geometry passes or any effort from the developers.
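As a rough illustration of what "render the scene n times with jittered geometry" looks like in practice (the 4-sample rotated-grid offsets and the projection-offset math below are generic, not tied to any particular API or game): each pass nudges the projection by a sub-pixel amount, renders the whole scene to its own off-screen buffer, and the buffers are averaged at the end.

Code:
#include <cstdio>

// Sub-pixel jitter offsets (in pixels) for a 4-sample rotated-grid pattern.
// Any pattern works; the cost is one full geometry pass per sample.
static const float kJitter[4][2] = {
    { -0.375f, -0.125f }, { 0.375f,  0.125f },
    { -0.125f,  0.375f }, { 0.125f, -0.375f }
};

int main()
{
    const int width = 1280, height = 720;

    for (int pass = 0; pass < 4; ++pass) {
        // Convert the pixel-space jitter into a projection offset in
        // normalized device coordinates (the NDC range spans 2 units).
        float ndcX = 2.0f * kJitter[pass][0] / float(width);
        float ndcY = 2.0f * kJitter[pass][1] / float(height);

        // In a real renderer you would fold (ndcX, ndcY) into the projection
        // matrix (or post-multiply by a translation), draw the scene into an
        // off-screen FP16 buffer, and average the n buffers afterwards.
        std::printf("pass %d: jitter (%+.3f, %+.3f) px -> NDC offset (%+.6f, %+.6f)\n",
                    pass, kJitter[pass][0], kJitter[pass][1], ndcX, ndcY);
    }
    return 0;
}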
 
Xmas said:
Any pattern is possible with supersampling if you're willing to render the scene n times with jittered geometry. That is possible on any card with render to texture. But "real hardware-supported RGSSAA" wouldn't need multiple geometry passes or any effort from the developers.

Cheers... it's all crystal clear now!
 
Well, since that chip actually fits the board onto which it was placed, it's at least a little more likely that it's a real G71 than that DailyTech piece :)
 
Jawed said:
Do you think that ATI and NVidia are only now just implementing fine-grained clock-gating?

FGCG was initially introduced for low(er)-power applications. Given the initial problems with getting the clock skew right, and the corresponding reduction in maximum clock speed, it's possible that they postponed using this technique for the high-end products.

Jawed said:
I was under the impression that they've both been doing this kind of thing for a while now, to achieve low-power in laptop/mobile (and using that tech in desktop, too).

Yes, that seems very likely. But note that laptop/mobile components don't have to run at the breakneck speeds of desktop parts, so you have more margin to introduce new techniques.

Jawed said:
Perhaps what you're saying is that they're moving from coarse-grained to fine-grained. Or at least that NVidia may well have done so with G71.

All I'm saying is that this may explain the missing 25 mm^2 that everybody loves to speculate about. ;)

BTW, even if they are only now starting to use FGCG, it doesn't mean they'll abandon coarse-grained gating: that's still an extra bonus for when a block is really completely idle.
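To put a toy number on the coarse-vs-fine distinction (the block structure and activity figures below are completely made up, purely to illustrate the idea): coarse gating only stops the clock when the whole block is idle, while fine-grained gating clocks each register group only on the cycles where it actually has work.

Code:
#include <cstdio>
#include <cstdlib>

int main()
{
    const int stages = 8;           // register groups in one hypothetical block
    const int cycles = 100000;
    long coarseClocks = 0, fineClocks = 0;

    std::srand(42);
    for (int c = 0; c < cycles; ++c) {
        int active = 0;
        for (int s = 0; s < stages; ++s)
            if ((std::rand() % 100) < 30)   // each stage busy ~30% of cycles (made up)
                ++active;

        // Coarse gating: the whole block gets clocked unless every stage is idle.
        if (active > 0) coarseClocks += stages;
        // Fine-grained gating: only the stages with valid work see a clock edge.
        fineClocks += active;
    }

    std::printf("register-clock events: coarse %ld, fine %ld (ratio %.2f)\n",
                coarseClocks, fineClocks, double(fineClocks) / double(coarseClocks));
    return 0;
}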

Jawed said:
I'm just trying to get a feel for timescales here. If we assume that they're lagging behind Intel/AMD, by how much?

I didn't really say that they're lagging: I don't even know if Intel and AMD are using fine-grained clock gating for their maximum-speed processors. (Although I can't imagine that they're not using it for their mobile chips.)
It's a totally different ballgame anyway: Intel processors still use a lot of custom logic, where they can use every dirty trick in the book. As far as I know, ATI and Nvidia use standard-cell technologies, where the number of games you can play is much more limited.
 
kemosabe said:
Is it just my sore eyes or is this G71 die shot not far larger than the Dailytech one? Atomt, care to do the honours? ;)


This one is larger than the one on DailyTech. The DailyTech one also has 2 rows of unused solder pads all around the package. The die is larger and the package (substrate) is larger as well. This one is identical to the G71 in the nvnews.net shot, right down to the chip caps. The major difference is the laser marking: G71-U-N-A2 vs G71-GT-N-A2.

One is the Ultra (GTX) and the other is the GT.
 
Don't know if anybody has mentioned the die sizes Anand is quoting, but here is what he said:

196mm^2 for the G71
353mm^2 for the R580

125mm^2 for the G73
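As a rough sanity check on those figures (taking the commonly quoted ~333mm^2 and ~302 million transistors for the 110nm G70, and ~278 million transistors for G71, as approximate inputs): an ideal optical shrink from 110nm to 90nm scales area by (90/110)^2, about 0.67, so 333mm^2 would become roughly 223mm^2, and trimming the transistor count by 278/302 brings that down to roughly 205mm^2. Anand's 196mm^2 is therefore in the ballpark of near-ideal scaling rather than wildly below it.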
 
atomt said:
I would like to accept the bet, but I try not to make senior members look .....

Reality is that I don't have a 7800GT for a more accurate verification, and all the great minds here prefer not to use a ruler to make some measurements. I still think it is > 200mm^2.

Thursday is just 2 days away and by month end, we will know whether G73 is 125mm^2 or 150mm^2.

No problem for a junior member to eat crow. I just scratch my balls wondering where all the scaling went bad.

http://www.beyond3d.com/misc/chipcomp/?view=chipdetails&id=112&orderby=release_date&order=Order&cname=

:cool:
 
Note: the larger die images use the same rules as the later scans (which include all the R5xx and the new 90nm G7x series), so they should be comparable.
 
Did you get 287 right, or is it 278 million transistors? I wonder if you accidentally swapped the numbers.

Edit: when you get the time, can you update the NV30-era cards with die sizes as well?
 
geo said:

I have not seen any accurate measurements of G71. It still looks bigger than 196mm^2.
Anyway, the one that really mattered to me was G73, where it would have been really obvious whether it was 150 or 125. I got screwed by another fake die at DailyTech. The so-called G73 that has been floating around VR-Zone and elsewhere is another fake. G73 is now smaller than a GDDR3 memory package.

So I was wrong. It is OK to measure my wang with an Nvidia ruler, because it would not measure smaller than actual. Does ATI have the other kind of ruler?
 