The LAST R600 Rumours & Speculation Thread

Status
Not open for further replies.
Why? Last gen the relative power consumption gap between entry-level and low-end was the same. IIRC the 7300 (not GT) consumes around 15W, and the 7600 around 50W. Another example is RV530 vs RV560: about 35W vs 55W. If RV560 had been 90nm like RV530, one might assume it would be closer to 35W vs 60-something watts.

What if the RV630 has twice the shaders etc. (guesstimate: 16 vs 8) and twice the memory (256MB vs 128MB)?

It could make sense.
 
Last edited by a moderator:
If I assume that RV6x0 parts won't truly exceed a 128-bit bus width, then there seems to be a gap in my layman's reasoning. Let me get this straight: high-end GPUs "need" a 512-bit bus, while mainstream GPUs don't even get half of it? What am I missing?
 
RV610, the low-end version, will only have a 64-bit memory interface

Since when has the low end been 64-bit only? Isn't the X1300 128-bit? What's the big idea here, unless they are planning to start a new trend to celebrate DX10? :D I think this must be BS.
 
Since when has the low end been 64-bit only? Isn't the X1300 128-bit? What's the big idea here, unless they are planning to start a new trend to celebrate DX10? :D I think this must be BS.

Some X1300s are 64-bit, the others 128-bit; it depends on how low down the range you want to go.
 
If I assume that RV6x0 parts won't truly exceed a 128-bit bus width, then there seems to be a gap in my layman's reasoning. Let me get this straight: high-end GPUs "need" a 512-bit bus, while mainstream GPUs don't even get half of it? What am I missing?

Apparently the same thing I'm missing. It doesn't make sense to me, especially a 64-bit bus; that's pretty darn low, even for the low end. But I agree with your questioning: why a 512-bit bus for the high end, a quarter of that for the mainstream, and then an eighth of it for the low end?
 
The high end is about performance, the low end is about rolling in the dough. Clearly a 256-bit bus in the midrange would cut into profits, and a 128-bit bus might be too much for the $50 X2300SE.
 
Yes, but it's not 64-bit only. But what the hell, if you're going to produce 64-bit-only chips for the low end, what better time to start than DX10?

Possibly the cheap availability of faster memory negates the need for a 128-bit bus. I still doubt anything but the very lowest end (such as SE and LE cards) would be 64-bit, as the X1300 series cards came in both 64-bit and 128-bit versions.
 
The first question is: why?
The second question is which version will have the better cooling system (12.4" or 9.5"), since they are going to be the exact same cards. Unless the 9.5" version will take up 3 slots instead of 2 for cooling.

I was under the impression the OEM 12.4" card was the A12 version and the Retail 9.5" card is the A15. Of course, this was my impression.

US
 
Interesting. So am I to conclude NV will put the 640MB GTS against a 512MB XT? And will the RAM size overshadow the bus speed in most people's minds (assuming price parity, with the 320MB GTS undercutting both)?

Then again, the gap b/w the current GTS and GTX is nothing like the minor one in the 6800 generation, so maybe I should stop trying to find historical analogies to every single new GPU generation. I may also be giving ATI too little credit for whatever a 512MB, 512b XT might be, though I may also be wrong in thinking it'll be direct competition for a (current or imminently updated [8900?]) GTS.
 
Here we go with huge memory sizes again :(. At 512MB it's all about the bandwidth. Too bad the poor souls at Best Buy don't know that half the time, when they go out and buy a low-bandwidth card with lots of memory on it ;P.

I wonder about the bandwidth performance of the 640MB. If it's better, then perhaps it will be more interesting to consider.
 
Interesting. So am I to conclude NV will put a 640MB GT against a 512MB XT? And will the RAM size overshadow the bus speed in most people's minds (assuming price parity, with the 320MB GT undercutting both)?

Then again, the gap b/w the current GT and GTX is nothing like the minor one in the 6800 generation, so maybe I should stop trying to find historical analogies to every single new GPU generation. I may also be giving ATI too little credit for whatever a 512MB, 512b XT might be, though I may also be wrong in thinking it'll be direct competition for a (current or imminent) GT.

Could this hypothetical "8800 GT" use the shorter GTS PCB, the same 384-bit bus as the GTX (the space for the two extra RAM chips is there, unused), but lower core clocks and 384MB of GDDR3 instead of 768MB?

It would fit in nicely between the GTS and the GTX, I think.
And it sure would make a price war between it and the R600 XT/XL more interesting.
 
One thing I think is certain: the 8800 GTS will absolutely not beat ATI's 2nd most powerful R600, as in nearly every game it ties or is barely faster than ATI's X1950.
 
Could this hypothetical "8800 GT" use the shorter GTS PCB, the same 384-bit bus as the GTX (the space for the two extra RAM chips is there, unused), but lower core clocks and 384MB of GDDR3 instead of 768MB?
:oops:

I wasn't trying to hint at something, that was just me forgetting the "S" in the current GTS. I don't think it's a stretch to assume a GT at some point, but I don't know anything about it, and I'm not trying to hint at anything. Sorry if I caused any confusion.
 
The high end is about performance, the low end is about rolling in the dough. Clearly a 256-bit bus in the midrange would cut into profits, and a 128-bit bus might be too much for the $50 X2300SE.

I don't see why a $250 GPU with a 256-bit bus would cut into "profits"; if so, what is RV570? A technology exercise? Normally you get roughly half the bandwidth with midrange GPUs. Now, assuming that R600 will have over 120GB/s of bandwidth, in order to get "just" 50GB/s on a 128-bit bus you'd need something short of 1.6GHz GDDR4.

I should note that most of us naturally realise that with a midrange GPU you don't expect to use high AA sample densities at high resolutions, but either the relevant newsblurbs are bullshit or the gap between midrange and high end truly leaves a large question mark until further details are known.
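The bandwidth arithmetic above can be checked with a quick sketch. Peak bandwidth is just bus width in bytes times the effective data rate, and GDDR is double-pumped, so the effective rate is twice the memory clock. All the figures here are the thread's speculation, not confirmed specs:

```python
# Peak memory bandwidth = (bus width in bytes) * (effective data rate).
# GDDR is double data rate, so effective rate = 2 * memory clock.

def bandwidth_gb_s(bus_bits, mem_clock_ghz):
    """Peak bandwidth in GB/s for a double-pumped memory interface."""
    return (bus_bits / 8) * (mem_clock_ghz * 2)

def clock_needed_ghz(bus_bits, target_gb_s):
    """Memory clock required to hit a target bandwidth."""
    return target_gb_s / ((bus_bits / 8) * 2)

# To get ~50 GB/s out of a 128-bit bus you need roughly 1.6 GHz GDDR4,
# as the post says:
print(clock_needed_ghz(128, 50))   # 1.5625
print(bandwidth_gb_s(128, 1.6))    # 51.2
```

The same function shows why the high end "needs" the wide bus: `bandwidth_gb_s(512, 1.0)` already gives 128 GB/s with fairly modest memory clocks.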
 
One thing I think is certain: the 8800 GTS will absolutely not beat ATI's 2nd most powerful R600, as in nearly every game it ties or is barely faster than ATI's X1950.

Depends on the exact configuration of ATI's 2nd most powerful R600 GPU.

By the way, I'd suggest that upcoming games are more fitting for comparing D3D10 GPUs; one example would be here:

http://www.anandtech.com/video/showdoc.aspx?i=2895&p=4

Nearly 40% more is hardly "ties" in my book, heh...
 
That game is such a shoddy port, I don't know if it's indicative of any future PC games. I mean, 60 fps at a pretty low res with no AA on the most powerful video card, especially considering how mediocre the game looks, is pretty bad.
 
That game is such a shoddy port, I don't know if it's indicative of any future PC games. I mean, 60 fps at a pretty low res with no AA on the most powerful video card, especially considering how mediocre the game looks, is pretty bad.

It's supposed to be based on the UE3 engine, and under DX9.0 I don't think we'll see any MSAA with that one anyway.

As for it being indicative or not, pick your poison instead with a game like Oblivion:

http://www.techreport.com/reviews/2006q4/geforce-8800/index.x?pg=14

This one is hardly a "shoddy port", and it isn't as CPU limited as HL2 and the like.
 
It's faster in Oblivion too, but in most other games it's basically even or slightly ahead/behind. HL2 isn't CPU limited at high res with AA/AF; GPU performance is way down from lower res.

Actually, going and looking at a bunch of reviews, in games with no HDR the 8800 GTS is usually right around the R580.
 
I don't see why a $250 GPU with a 256-bit bus would cut into "profits"; if so, what is RV570? A technology exercise? Normally you get roughly half the bandwidth with midrange GPUs. Now, assuming that R600 will have over 120GB/s of bandwidth, in order to get "just" 50GB/s on a 128-bit bus you'd need something short of 1.6GHz GDDR4.

I should note that most of us naturally realise that with a midrange GPU you don't expect to use high AA sample densities at high resolutions, but either the relevant newsblurbs are bullshit or the gap between midrange and high end truly leaves a large question mark until further details are known.

I don't think RV630 will be $250, nor do I think it'll be midrange. I think it is more of a <$200 card, as RV610 will be ~$100. I'm thinking 1/4 and 1/8 of R600, for which a 128-bit and a 64-bit bus both sound feasible. On the echelon of gfx cards from crap to golden (X1300 / X1600 / X1650 / X1900GT-X1950Pro / X1900XT/X), I believe these are the X1300 and X1600 as CJ stated, meant for the more-so low end (where the money is).

While this is not to say I think RV630 will be as terrible as the X1600 was last gen (obviously making the choice to go with an R580-style X1300 wasn't enough for even low-end midrange; it was also a unique situation), I believe the best candidates for a 256-bit bus would be RV660 and RV670, if not the latter with >256-bit; presumably the X1650 and X1950Pro of the R600 generation. Those will probably fall in the ~$200/~$300 range. If they keep the same shader fractions as last gen, it's hard to imagine an RV660 (1/2 of R600) with a 128-bit bus. It's a shame we have to wait so long (presumably Q3) to actually have decent midrange cards from AMD... but they should be exciting when they do come.

Until that time, we'll just have to settle for however they butcher the R600 to fill the XL and perhaps GT/O monikers and price tags (i.e. filling the gap in the midrange until those parts arrive), but at least we'll probably still get the 512-bit bus on those parts.
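The half/quarter/eighth bus-width gap the thread keeps circling back to can be made concrete with a quick sketch. Every bus width below is speculation from this thread, and the memory clock is an arbitrary placeholder, not a confirmed spec:

```python
# Hypothetical R600-family bandwidths using the bus widths speculated in
# this thread (512-bit R600, 128-bit RV630, 64-bit RV610). The memory
# clock is a placeholder; nothing here is a confirmed spec.

MEM_CLOCK_GHZ = 1.1  # assumed GDDR clock; effective rate doubled (DDR)

speculated_buses = {
    "R600 (high end)":    512,
    "RV630 (mainstream)": 128,
    "RV610 (low end)":     64,
}

# Peak GB/s = bytes per transfer * effective transfers per second.
bandwidths = {
    name: (bits / 8) * MEM_CLOCK_GHZ * 2
    for name, bits in speculated_buses.items()
}

for name, gb_s in bandwidths.items():
    print(f"{name:20s} {gb_s:6.1f} GB/s")
```

At any fixed memory clock the mainstream part gets a quarter of the high end's bandwidth and the low end an eighth, which is exactly the gap the posts above are questioning.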
 