The LAST R600 Rumours & Speculation Thread

Status: Not open for further replies.
I think he might mean that there were two different chips that powered the GeForce 6800 (lovingly called "the vanilla" back then) over its lifespan. But then that's a competitive point, and I don't know that chavvdarrr gives a fig. :smile:

Edit: How did I get to be translator in the middle of this convo anyway? Oh, right, my incurable busy-body gene. :LOL:
 

Actually, there were 4:

NV40/45 (IBM)
NV41 (IBM)
NV42 (TSMC)
NV48 (TSMC)
 
Apologies for OT, but did NV48 ever actually exist?

edit: Twink beat me to it.

NV48 was supposed to be a 6800 Ultra Ultra (6900?). I think 512MB and 110nm, IIRC. It was supposed to be a quick refresh from TSMC because 'the small n' was pissed with IBM's performance on NV4x.
 
I would like to point out that Nvidia enjoyed a massive die-size advantage this last generation, an advantage that seems to be gone in the DX10 generation. So their great margins will shrink.
 
Yeah, RV610 and RV630. IMHO, those specs sound more like what one would expect from RV610, but who knows... If RV630 is specced as crappily relative to R600 as RV530 was to R520/R580, it may just be possible. :???:


I would like to point out that Nvidia enjoyed a massive die-size advantage this last generation, an advantage that seems to be gone in the DX10 generation. So their great margins will shrink.

How so? Nvidia will likely release an 80nm refresh and mid-range parts at or before ATi gets one 80nm part out the door. Granted, ATi will likely have 65nm parts out on the cash-cow low end first, but who's to say Nvidia won't hit 65nm with a future G8x or G90 before we see 'R680' on 65nm? Even more so, who's to say we won't see 65nm or even 55nm 8600 and 8300 parts like we've seen with G73 (only earlier, since 65nm will be ready earlier in the product cycle than 80nm was)?

If you're saying a shrunken G80 will be larger than R600, or a 65nm G8x larger than a 65nm R6xx, who's to say? G70 became a WHOLE lot smaller with the 90nm shrink, and even G73 went from 125 to 100mm² with the 90nm-to-80nm shrink. Even if only the latter holds true, an 80nm G80 could be a similar size to, if not smaller than, R600.
 
H1 07
R600 Q1?
RV610 (bum) Q2?
RV630 (shaka) Q2?

H2 07
RV660 (laka) Q3?
RV670 (cheeseburger? No idea... but it's on the roadmap) Q3/Q4?
R6xx (R680?) Q3/Q4?
 
You're forgetting that NVidia trimmed-down the pixel shader pipeline at the same time - considerably.

Jawed
Huh? G71 dropped ~20M transistors in the translation to 90nm, but nothing was lost caps- or perf-wise per clock compared to G70. It's still 6 quads of dual-MADD PS, 8 VS, 16 ROPs, 256-bit, etc., as G70 is.

Or am I missing something?
 
I thought it was 25M transistors, but it was the cut in transistors that I was referring to.

Jawed
 
Yeah, I was under the impression it was the same, minus 25M transistors (still not clear how they cut those).

Still, regardless of 302M transistors in G70 and 278M in G71, the die still went from 334mm² to 196mm². If Nvidia can shrink that GPU by over 40% going from 110nm to 90nm in whatever fashion they did, I have little doubt they can pull a similar miracle going to 65nm, if there isn't a slight surprise with an 80nm part first... I wouldn't be surprised if that was under 400mm² and a 65nm part was under 250mm², if not smaller. If those are released before their ATi counterparts, and perform similarly or only slightly worse while perhaps being smaller dice, that's not losing margin because the die is too big; it's keeping the same position they had last time: having a close-enough-performing product at a way cheaper cost, allowing them to cut prices if need be. Until we see R600 vs G80 (/G81?), RV610 vs G84, and RV630 vs G86, it's too early to call. Even though Nvidia's mid-range may be 80nm vs ATi's 65nm, it'll arrive sooner, which is what matters, and we still don't know the die size of any of those parts either.
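To put some rough numbers on that (my own back-of-the-envelope math, not anything from a roadmap): ideal die area scales with the square of the process-node ratio, and the G70-to-G71 shrink actually beat that ideal, presumably helped by the ~25M transistor cut. A quick sketch:

```python
# Back-of-the-envelope die-shrink estimate: ideal area scales with the
# square of the node ratio. Real shrinks vary (layout changes, pads, I/O
# don't scale perfectly), so treat these as rough upper/lower bounds.

def ideal_shrink(area_mm2: float, from_nm: int, to_nm: int) -> float:
    """Ideal scaled die area when moving between process nodes."""
    return area_mm2 * (to_nm / from_nm) ** 2

# G70 (110nm, 334mm2) -> 90nm: ideal ~224mm2, yet G71 landed at 196mm2,
# better than ideal (the transistor cut helped).
print(round(ideal_shrink(334, 110, 90)))

# G73 (90nm, 125mm2) -> 80nm: ideal ~99mm2, close to the ~100mm2 reported.
print(round(ideal_shrink(125, 90, 80)))
```

Running the same math from a hypothetical 90nm G80 area gives numbers in the same ballpark as the "under 400mm² at 80nm, under 250mm² at 65nm" guesses above, for what that's worth.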

Point being, Nvidia isn't going to lose any margins with larger dice unless they have an actual competitor that can sell a similar product. Right now, and perhaps when G84/G86 are released, it's all gravy... just like G70: even if the SOB was big for its time, it had no competitor. When it did (R520), they were approximately the same-size chips even though ATi was already on 90nm. By the time it was overtaken (R580... 'cause when R520 came out they were fairly similar), that's when they started reaping dough with the smaller die. Who's to say they won't do the same this time with an 80nm part against R600, or a 65nm part against R600 and perhaps R6-whatevernumber-0, if it actually makes it to market at a similar time? We have no clue how their architecture will do with the shrink and/or the scaling-down to mid-range parts. It may be largely beneficial; it may not be. Same for ATi, although with them we have no idea if their chips will be on time (history says 'no') at similar die sizes and similar/smaller processes. At this point, we just don't know... So it's hard to say Nvidia is sitting in a tough place when right now it looks almost identical to how things looked post-G70-release and pre-R520-release (remember how much we expected from that chip?).

I realize that sounds pro-NV and anti-ATi, but my allegiance is to the red (green?) team... It's just that realism overtakes optimism and fanboyism at a certain point.
 
Huh? G71 dropped ~20M transistors in the translation to 90nm, but nothing was lost caps- or perf-wise per clock compared to G70. It's still 6 quads of dual-MADD PS, 8 VS, 16 ROPs, 256-bit, etc., as G70 is.

They added a bit too, though, didn't they? All we know is the net, not the subtracted-plus-added to get there.
 
They added a bit too, though, didn't they? All we know is the net, not the subtracted-plus-added to get there.

Yep.
They "added" roughly 200MHz to the core clock, while decreasing the memory clock (compared to the 7800 GTX 512MB) from 1.8GHz to 1.6GHz. ;)
 