The LAST R600 Rumours & Speculation Thread

Remember, even a single 8800 GTX is CPU limited in most scenarios, and only in combinations of AA+AF+insane resolutions do we see GPU limitations with current and near-market software titles.

True....

But still, if an R600 XTX is running at 2560x1600 (and on top of that, say, 32x AA + 16x AF on a single card, if that's even supported), it will still have a hard time eating up ~100 GB/s of bandwidth; it's more likely to be GPU limited at that point.
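To put a very rough number on that, here's a back-of-envelope sketch only: it assumes uncompressed colour+Z writes, a guessed 3x overdraw factor, and it ignores texture traffic and framebuffer compression entirely, so real figures will differ.

```python
# Very rough ROP write-traffic estimate, ignoring compression and texturing entirely.
def rop_traffic_gb_s(width, height, aa_samples, fps,
                     overdraw=3.0, bytes_color=4, bytes_z=4):
    """Worst-case colour+Z write traffic per second, in GB/s (assumed parameters)."""
    pixels = width * height
    bytes_per_frame = pixels * aa_samples * (bytes_color + bytes_z) * overdraw
    return bytes_per_frame * fps / 1e9

for samples in (4, 8):
    print(f"2560x1600, {samples}x AA, 60 fps: "
          f"~{rop_traffic_gb_s(2560, 1600, samples, 60):.0f} GB/s")
# -> roughly 24 GB/s at 4x and 47 GB/s at 8x under these assumptions,
#    which leaves a lot of headroom before 100+ GB/s is saturated.
```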
 
What do you consider insane resolutions? I don't know why people keep saying we don't need anything faster than the G80 for anything under 2560x1600, when we clearly do.

Anything above 1600 x 1200/1680 x 1050.
I'm talking 1920 x 1080, 1920 x 1200, 2560 x 1600 and beyond, all the way up to the nirvana-like WHUXGA resolution (7680 × 4800, if you can find a display for it :D) .
 
Anything above 1600 x 1200/1680 x 1050.
I'm talking 1920 x 1080, 1920 x 1200, 2560 x 1600 and beyond, all the way up to the nirvana-like WHUXGA resolution (7680 × 4800, if you can find a display for it :D) .

1920x1080 isn't too far above 1600x1200 (about 8% more pixels). Considering that this resolution is supported by a number of the nicer widescreen LCDs as well as a number of HDTVs, and is "True HD" (or whatever the marketing term is), I wouldn't really call it extreme, especially for a $400+ brand spanking new GPU. When the NV40/R420 series was released I paid special attention to performance at 2 MP (1600x1200), because at the time I had a 1600x1200 monitor. That was in the summer of 2004. We are now in winter 2007, so an 8% increase in pixels doesn't seem all that extreme to me.
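Quick check on those pixel counts, just plain arithmetic:

```python
# 1600x1200 vs. 1920x1080: how big is the jump in pixel count, really?
px_1600x1200 = 1600 * 1200   # 1,920,000 pixels
px_1920x1080 = 1920 * 1080   # 2,073,600 pixels
print(px_1920x1080 / px_1600x1200)  # ~1.08, i.e. about 8% more pixels
```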

2560x1600 could rightly be considered "extreme" due to the lack of marketing or supporting display products, but the 1080p market (and similar resolutions like 1920x1200) seems to be the "high end" market--exactly the users GPUs like G80 and R600 are targeting.
 
Umm, I don't consider 1920x1200 an extreme res, especially considering it's going to become the standard for the PS3 and HD movies in the very near future (1920x1080, but close enough), not to mention any relatively large LCD is going to have at least this res. You're basically trying to relegate even the most powerful of PCs to tiny screens. Using a 24-incher at 1920x1200 myself, I could have a card twice as powerful as the 8800 and still need a lot more speed for current games to run at a steady 60 fps or better. I'd say with today's games, a card 3x as fast as G80 would be completely ready for 1080p gaming with AA/AF.
 
So, GDDR4 seems to be restricted to 1.1, 1.2 and 1.4 GHz, unless someone has a pile of older GDDR4 stock in storage for PCB integration.
There is 1.6GHz GDDR4 too, fwiw, but I'm not sure what the yields are and if it's even still in production. The prices on that must be quite "interesting", too.
 
I'd be lying if I didn't admit that I've been entertaining the idea that R600 might be over 2x as fast as G80. Can always hope.
 
Since when would a normal HDTV output of 1920x1080 be interpreted as "insanely high"? And what will R600 do for our Blu-ray/HD DVD decoding?
 
Allow me to recap possible R600 XTX bandwidth figures, based on available Samsung GDDR4 parts (I've "rounded" them a bit for easy reading):


R600 (512 bit bus):

- 1.0 GHz GDDR4 (x2): 128 GB/s
- 1.1 GHz GDDR4 (x2): 141 GB/s
- 1.2 GHz GDDR4 (x2): 154 GB/s
- 1.4 GHz GDDR4 (x2): 179 GB/s


Just out of curiosity, here are the numbers for the existing G80 GTX (384 bit bus), as well as what it would achieve using the same GDDR4 chips detailed above while keeping the same bus width.

G80 (384 bit bus):

- 0.9 GHz GDDR3 (x2): 86 GB/s (current version)
- 1.0 GHz GDDR4 (x2): 96 GB/s
- 1.1 GHz GDDR4 (x2): 106 GB/s
- 1.2 GHz GDDR4 (x2): 115 GB/s
- 1.4 GHz GDDR4 (x2): 134 GB/s
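As a quick sanity check on the figures above, here's a minimal sketch of the arithmetic behind them (the helper function is mine, purely for illustration): peak bandwidth is just the bus width in bytes times the effective data rate, with the "(x2)" reflecting GDDR3/GDDR4's double data rate.

```python
# Peak theoretical memory bandwidth = (bus width in bytes) * effective data rate.
# GDDR3/GDDR4 transfer data twice per clock, hence the "(x2)" in the lists above.
def bandwidth_gb_s(bus_width_bits: int, mem_clock_ghz: float) -> float:
    """Peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    bytes_per_transfer = bus_width_bits / 8   # e.g. a 512-bit bus moves 64 bytes per transfer
    effective_rate_gt_s = mem_clock_ghz * 2   # DDR: two transfers per memory clock
    return bytes_per_transfer * effective_rate_gt_s

# R600 (512-bit) with the Samsung GDDR4 speed grades listed above, plus 1.6 GHz:
for clk in (1.0, 1.1, 1.2, 1.4, 1.6):
    print(f"R600 512-bit @ {clk} GHz GDDR4: {bandwidth_gb_s(512, clk):.0f} GB/s")

# G80 GTX (384-bit): the current 0.9 GHz GDDR3, then the same hypothetical GDDR4 grades:
for clk in (0.9, 1.0, 1.1, 1.2, 1.4):
    print(f"G80 384-bit @ {clk} GHz: {bandwidth_gb_s(384, clk):.0f} GB/s")
```

This reproduces the numbers above (128/141/154/179 GB/s and 86/96/106/115/134 GB/s), and also shows that the 1.6 GHz grade mentioned earlier would land at roughly 205 GB/s on a 512-bit bus, if it's actually obtainable.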



Edit:
Forgot to mention this, but the 1.0 GHz GDDR4 part (as used in the current X1950 XTX) is not listed anymore on the official Samsung semiconductor website, and was replaced by a 1.0 GHz GDDR3 chip.
So, GDDR4 seems to be restricted to 1.1, 1.2 and 1.4 GHz, unless someone has a pile of older GDDR4 stock in storage for PCB integration.
My personal bet would be 1.2 GHz chips for the top R600 XTX, but I guess final decisions can be made very late, based on the number of different SKUs and RAM prices.

What makes you think the G80 can use GDDR4?
 
What makes you think the G80 can use GDDR4?
Perhaps because NVIDIA said so at Editors' Day? At worst, it might need a respin (think R580+), but I wouldn't be surprised if that wasn't even necessary. Of course, the board would have to be redesigned...
 
Low-end R600

Details from vr-zone

We have gotten some concrete information on the RV610 and RV630. Card samples will start to appear in March, with the launch slated for the April timeframe. There will be 3 variants of RV630, supporting DDR2, DDR3 or DDR4. The low-end RV610 will be built on a 65nm process and comes in 2 variants: RV610LE supporting DDR2 and RV610PRO supporting DDR3. The memory clock for the RV610LE stands at 400 MHz, while the RV610PRO is at 700 MHz.
 
So are we considering "extreme resolutions" to equal "anything higher than I personally have" now? :LOL: Should we ask Dell how many 2405s and 2407s they've sold at 1920x1200 in the last two years? Who do you think is buying those? The IGP crowd? :smile: If R600's bandwidth advantage allows it to kick butt at 1920x1200 (with AA/AF, of course), then that is going to be a significant help in its ability to move units.
 
Problem is, recent games at those resolutions and high AA and AF settings are more fillrate bound than bandwidth bound.
 
Problem is, recent games at those resolutions and high AA and AF settings are more fillrate bound than bandwidth bound.

How so? The GTX takes a pretty significant performance hit for enabling AA. Also, R600 is presumably balanced in such a way that it has enough fillrate or other features to effectively utilize its enormous bandwidth.
 
http://firingsquad.com/hardware/nvidia_geforce_8800_preview/page14.asp

Actually, quite a few games are pixel fillrate bound, and fillrate tends to be affected more than bandwidth as you go up in AA, so I don't see how the extra bandwidth will be usable if the shader power and fillrates aren't higher than the 8800's.

Let's take Quake 4 for example at 1920x1200: the GTS and X1950 XT have the same bandwidth and very close fillrates, but as the res gets bumped up one more notch, suddenly the GTS jumps ahead.

If we look at FEAR from the same review, the exact opposite happens; why, I'm not sure, it doesn't really fit the bill.

In Oblivion both cards are equal, as expected with equally matched cards, but as you overclock the GTS core, the GTS gets very close to GTX performance.

COD 2 is tied all the way through, as expected.


http://www.tweaktown.com/articles/977/11/page_11_benchmarks_high_quality_aa_and_af/index.html

This one is a bit easier to follow, since the modes are equal.

HL2 is the only game that sticks out like a sore thumb: why, with the same bandwidth, is the GTS quite a bit faster at HL2?

Bandwidth isn't playing much of a role here at higher settings. Which is expected: going to 8x AA would only increase fillrate needs by two times over 4x AA, but will the bandwidth need also increase 2x? I really don't think it will.

Trini, AA also affects fillrates, not just bandwidth; to a much higher degree, actually.
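For what it's worth, here's a small sketch of where the "fillrate doubles" argument comes from: raw multisample storage (assuming 4-byte colour plus 4-byte Z/stencil per sample, no compression) scales linearly with the sample count, and so does worst-case ROP work, while colour/Z compression usually keeps the actual bandwidth growth well below 2x.

```python
# Raw multisample buffer footprint (assumed 4-byte colour + 4-byte Z/stencil per
# sample, no compression). Worst-case ROP work scales with the sample count, but
# compression usually keeps the real bandwidth cost from doubling along with it.
def msaa_buffer_mib(width, height, samples, bytes_per_sample=8):
    return width * height * samples * bytes_per_sample / (1024 * 1024)

for samples in (4, 8):
    print(f"1920x1200 @ {samples}x MSAA: {msaa_buffer_mib(1920, 1200, samples):.0f} MiB")
# -> ~70 MiB at 4x vs. ~141 MiB at 8x: twice the samples, twice the raw footprint.
```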
 
Because it's a completely different architecture.


Well, that means bandwidth has very little to do with the overall results; the shader core has a lot more to do with it, as with many other games.

This also happens in the Far Cry engine and the upcoming Crysis engine, not to mention Unreal Engine 3 technology too.
 