So, do we know anything about RV670 yet?

How much silicon would be added if they went single-cycle 4x :?:

noob mode -> Perhaps they didn't think it important enough, given how some games aren't fully compatible between HDR and AA (except through hacks). :| Not that I would have minded having it, though. I'd have thought AA would be more important at "lower" resolutions. Are the 16 ROPs or TMUs sufficient there too :?: I mean, one could argue that AA matters less the higher the pixel count, but then if there isn't enough pixel-pushing power there... :???:
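As a rough sanity check on the ROP question, here's a back-of-the-envelope fillrate sketch; the 742 MHz core and 16 ROPs are R600-class figures, while the resolution, overdraw and sample counts are made up purely for illustration:

```python
# Rough fillrate sanity check for the "are 16 ROPs enough at lower resolutions" question.
# Assumed figures: R600-class part, 742 MHz core, 16 ROPs (illustrative only).

core_clock_hz = 742e6
rops = 16
peak_fill = rops * core_clock_hz               # ~11.9 Gpixels/s peak colour fill

width, height, target_fps = 1680, 1050, 60
pixels_per_frame = width * height               # ~1.76 Mpixels
required_fill = pixels_per_frame * target_fps   # ~106 Mpixels/s of *final* output

# Even with heavy overdraw and multiple samples per pixel, raw ROP throughput
# dwarfs the final-output requirement at these resolutions, so any AA
# bottleneck is more likely elsewhere (resolve path, bandwidth, shaders).
overdraw = 4        # assumed
msaa_samples = 4    # assumed
print(f"peak fill: {peak_fill / 1e9:.1f} Gpixels/s")
print(f"needed (x{overdraw} overdraw, {msaa_samples}x samples): "
      f"{required_fill * overdraw * msaa_samples / 1e9:.2f} Gpixels/s")
```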
 
RV670 Revival (maybe aka 2950 Pro) pictured!

[Image: rv670revivalak7.jpg]


source

Maybe a good revival of R600 @ 256Bit + 55nm ;)

Another picture:
[Image: shemerv670ep1.png]


Die size seems to be ~250 mm².
 
I hope they've put a fan like that (i.e. with the blades the right way round) on the dual-slot version :???:
 
RV670 Revival (maybe aka 2950 Pro) pictured!

Looks like AMD hasn't learned the lesson: the VRMs sit in the same area of the PCB that overheated on the X1950 Pro with the same style of cooling solution.
Overheating VRMs cause another problem too: it's hard to swap the reference cooler for a third-party one, because there is no good third-party VRM heatsink/cooler; the Arctic Cooling VRM heat spreader is useless.
 
Or just high shader utilization.
In oZone3D's "Fur" rendering benchmark (extremely multi-pass, MADD-heavy) I saw a drop of ~50% on R600 and ~20% on R580 when 4xMSAA was activated.

This benchmark is 100% shader-limited on my GTS: a 41% shader-only overclock gains about 42% in the benchmark. It looks like it's hitting the shaders pretty hard, so I can see R600 taking an even bigger AA hit here than it does in games.

8800GTS - 1680x1050

Core/Shader = FPS

513/1188 (stock) = 24
513/1674 (+41% shader) = 34 (+42%)
674/1674 (+41% shader, +31% core) = 34 (+42%)

4x AA performance is kinda weird though.

Core/Shader/Mem = FPS

513/1674/792 = 30
513/1674/890 (+12% mem) = 30 (+0%)
674/1674/792 (+31% core) = 33 (+10%)

Why would a core clock boost improve 4xAA performance when bandwidth doesn't if ROPs are capable of single cycle 4xAA?
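For reference, here's how those percentages fall out of the raw clocks and FPS numbers above; this is just the arithmetic from the tables, nothing measured beyond them:

```python
# Recompute the scaling percentages quoted above from the raw numbers.

def pct(new, old):
    return (new - old) / old * 100

# No-AA run: shader-only overclock
print(pct(1674, 1188))   # ~40.9% shader clock increase
print(pct(34, 24))       # ~41.7% FPS increase -> near 1:1 with shader clock

# 4xAA run: memory vs core overclock at a fixed shader clock
print(pct(890, 792))     # ~12.4% memory clock increase -> +0% FPS
print(pct(674, 513))     # ~31.4% core clock increase   -> +10% FPS
```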
 
Or just high shader utilization.
In oZone3D's "Fur" rendering benchmark (extremely multi-pass, MADD-heavy) I saw a drop of ~50% on R600 and ~20% on R580 when 4xMSAA was activated.
Probably you've enabled AA through the CCC, which is not the correct way of testing if the application [Fur Bench] offers settings to adjust AA. Doing it that way, on my 2900 I get around a 3% hit with 4xAA enabled in the app.
 
Probably you've enabled AA through the CCC, which is not the correct way of testing if the application [Fur Bench] offers settings to adjust AA. Doing it that way, on my 2900 I get around a 3% hit with 4xAA enabled in the app.

Is AA really enabled then? ;)
 
Why would a core clock boost improve 4xAA performance when bandwidth doesn't if ROPs are capable of single cycle 4xAA?
Perhaps because MSAA doesn't need any memory bandwidth at all?

If there's an internal color cache, just like AMD shows on the R600 slides, then everything related to the ROPs never goes outside of the chip.

Testing on R580 and R600 revealed that memory bandwidth has a remarkably small impact on AA performance, which is as intended, btw (rough numbers in the sketch below).

For now, it seems R600's AA problem lies in the "fastpath" not being large enough:

- RV630 shows exactly the same drop when using MSAA, so it's not a matter of SPU utilization.
- No AA is just extremely fast, indicating that the ROP count is fine.
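To put rough numbers on the bandwidth point above: if colour compression and an on-chip colour cache behave anything like the slides suggest, the extra external traffic from 4xMSAA is small. The compression ratio and edge fraction below are assumptions for illustration, not R600 specifics:

```python
# Back-of-the-envelope MSAA colour-traffic estimate.
# Assumptions (illustrative only): 1680x1050, 60 fps, 4 bytes per colour sample,
# and that compression collapses most 4x sample traffic for non-edge pixels.

width, height, fps = 1680, 1050, 60
bytes_per_sample = 4
samples = 4
pixels = width * height

# Worst case: every sample written out uncompressed each frame.
uncompressed = pixels * samples * bytes_per_sample * fps / 1e9   # GB/s

# With compression, only edge pixels carry unique samples; interior pixels
# behave like a single sample. The 10% edge fraction is made up for the sketch.
edge_fraction = 0.10
compressed = pixels * (edge_fraction * samples + (1 - edge_fraction)) \
             * bytes_per_sample * fps / 1e9

print(f"uncompressed 4x colour traffic: ~{uncompressed:.1f} GB/s")
print(f"with compression (10% edges):   ~{compressed:.1f} GB/s")
```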
 
Or maybe the current driver AA implementation in the R600 marchitecture hogs the chip with a pile of wasted work.
If you take two AA modes with the same 4x base pattern, say Edge-Detect (12x) and the conventional box filter, and compare them, you'll see that the first one is actually a tad faster, despite resolving far more samples per filter kernel than the box mode does.
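A minimal conceptual sketch of why that can happen, assuming an adaptive, shader-based resolve that only evaluates the wide kernel where samples actually differ; this is illustrative pseudo-logic (greyscale samples, plain averaging), not ATI's actual resolve path:

```python
# Conceptual sketch: adaptive edge-detect resolve vs. plain box resolve.

def box_resolve(samples):
    # Plain box filter: average all MSAA samples of one pixel.
    return sum(samples) / len(samples)

def edge_detect_resolve(samples, neighbourhood):
    # If all samples agree, this pixel is interior: take any sample and stop.
    if max(samples) - min(samples) < 1e-6:
        return samples[0]
    # Only on detected edges pay for the expensive wide kernel that also
    # pulls in the neighbouring pixels' samples.
    wide = list(samples) + [s for px in neighbourhood for s in px]
    return sum(wide) / len(wide)

# Usage: a flat pixel short-circuits, an edge pixel pays the wide filter.
flat = [0.5, 0.5, 0.5, 0.5]
edge = [0.1, 0.9, 0.9, 0.1]
print(box_resolve(flat), edge_detect_resolve(flat, []))
print(box_resolve(edge), edge_detect_resolve(edge, [[0.1] * 4, [0.9] * 4]))
```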
 
According to FUDzilla, RV670 is ahead of schedule.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=3217&Itemid=1

Supposedly ATI had assumed that production silicon would be revision A12, and release would therefore be in Q1/08, but it turns out that revision A11 is actually bug-free (for the first time in ATI's history!) so the release has been brought forward to Q4/07.

If true, the launch could be very close to the G92 launch, and that would be great from the user's perspective; competition is all we need :smile:
 
If true, the launch could be very close to the G92 launch, and that would be great from the user's perspective; competition is all we need :smile:

I heard that volume production of RV670 cards starts tomorrow, so a November launch is very probable. :D

Maybe also interesting:
RV670Pro - Revival:
62W GPU, 104W total

RV670XT - Gladiator:
85W GPU, 132W total
 
I heard that volume production of RV670 cards starts tomorrow, so a November launch is very probable. :D

Maybe also interesting:


I saw that too (@ chiphell) and agree that it's very possible.

If the power consumption figures are correct, which seems quite likely, I wouldn't expect any huge overclocks to be possible with only 150W of available power. It's odd that with only a 10% clock increase the power consumption goes up by more than a third... unless, of course, the Pro is cut down in some way (i.e. 240 instead of 320 shaders).
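For what it's worth, a >1/3 jump in GPU power from a ~10% clock bump is plausible if the XT also runs at a higher voltage, since dynamic power scales roughly with f·V². The clocks and voltages below are assumptions for illustration, not confirmed specs:

```python
# Rough check on the Pro -> XT power jump using P_dynamic ~ f * V^2.
# Clocks and voltages are illustrative assumptions, not confirmed figures.

pro_clock, xt_clock = 750e6, 825e6      # ~10% clock increase (assumed)
pro_v, xt_v = 1.10, 1.25                # assumed voltage bump for the XT

scale = (xt_clock / pro_clock) * (xt_v / pro_v) ** 2
print(f"predicted dynamic-power scale: x{scale:.2f}")    # ~1.42

pro_gpu_w = 62
print(f"62 W Pro -> ~{pro_gpu_w * scale:.0f} W XT (quoted figure: 85 W)")
```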


Still, in my speculation, the slightly higher/similar clocks (vs. R600) will surely help if the AA resolve is still done in the shaders rather than relying on the available bandwidth, in which case the cut to 256-bit (roughly 2/3 the bandwidth of R600, with 1200MHz GDDR4) won't be much of a penalty.

Looks like this could be a tight package and actually very competitive, seeing that it appears to be on time. I guess we'll have to see what Nvidia actually releases come November...
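On the "roughly 2/3 of R600's bandwidth" point, the arithmetic works out as follows, using the 1200 MHz GDDR4 figure from above and the 2900 XT's 512-bit bus with 828 MHz GDDR3:

```python
# Memory bandwidth comparison: rumoured RV670 (256-bit, 1200 MHz GDDR4)
# vs. HD 2900 XT / R600 (512-bit, 828 MHz GDDR3). DDR -> 2 transfers/clock.

def bandwidth_gbps(bus_bits, mem_clock_mhz):
    return bus_bits / 8 * mem_clock_mhz * 2 / 1000   # GB/s

rv670 = bandwidth_gbps(256, 1200)    # ~76.8 GB/s
r600  = bandwidth_gbps(512, 828)     # ~106.0 GB/s

print(f"RV670: {rv670:.1f} GB/s, R600: {r600:.1f} GB/s, "
      f"ratio: {rv670 / r600:.0%}")  # ~72%, i.e. a bit above 2/3
```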
 