So, do we know anything about RV670 yet?

Fudo is still quite sure that R680 is two RV670 chips on the same board:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=4023&Itemid=1

Two RV670s on a single PCB

We have told you a few times about this card, but we can now confirm that R680 is up and running. However, it won't be launched before early next year.

The R680 is made of two RV670 chips on a single PCB, and AMD is still working on a driver for such a card. This is the card that will make quad Crossfire possible and, theoretically, with four of these cards in a 790FX motherboard, you would be able to run even octo Crossfire. That said, we are not aware of any plans for such a platform or a driver.

AMD has already made cards with two 2600PRO chips, but no one dared to launch the product at retail, so making a card with two RV670PRO chips was not as difficult, as people already had some experience with two cores on a single PCB. The great thing is that the new R680 card consumes less power than the dual 2600PRO card, as its chips are built on a 55 nm process.

If all goes well, AMD might even demonstrate R680 this year.
 
Guess Fudo missed the slide where Gigabyte already demonstrated "Quad Crossfire" with HD2600s, and the video where FSX was run on 4x Radeons?

And I'm still suspicious about the whole R680 issue. R680 is a chip codename, not a board codename, and even if you ignore that, why would a board with two RV670s need its own driver strings? Why couldn't it just use the RV670 strings?
 
Nope, the whole 4200-4600 line was out at once; the 4200 wasn't any faster than the 4400, as it had both lower core and memory clocks.
The 4400 was later "replaced" by the 4800SE, and the 4600 by the 4800; they were the "AGP 8x" versions of those chips.

Thanks for the correction, Kaotik; now that you mention it, I remember. The naming convention for the replacement cards was also slightly confusing back then. Glad things have not changed.
 
Can't you guys look at games where the R600 does very well when AA is turned on and ones where it does horribly when AA is turned on, and based on that, figure out the most plausible explanation for why AA takes such a hit?
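As a trivial illustration of the first step, here's a sketch (Python, with made-up FPS numbers; none of these figures are real R600 measurements) of computing the relative AA hit per title so the worst offenders can be grouped and inspected for common traits:

[code]
# Hypothetical benchmark numbers: FPS with AA off vs. 4xAA on.
# All figures are invented for illustration, not real measurements.
results = {
    "Game A": {"no_aa": 90.0, "aa4x": 45.0},
    "Game B": {"no_aa": 80.0, "aa4x": 70.0},
    "Game C": {"no_aa": 60.0, "aa4x": 25.0},
}

# Relative performance drop when 4xAA is enabled, worst first.
hits = sorted(
    ((game, 1.0 - fps["aa4x"] / fps["no_aa"]) for game, fps in results.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for game, hit in hits:
    print(f"{game}: {hit:.0%} drop with 4xAA")
[/code]

The titles at the top of that list are the ones worth dissecting for shared traits (heavy render-target use, shader-resolved AA, and so on).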
 
everyone under the sun said:
4200 was the shit!

Yeah, I forgot it was the awkwardly named MXs back then. My apologies. I hadn't used an Nvidia product until recently and didn't recall the exact setup that far back. Come on, who didn't have a 9700pro (or 9500pro -> 9700pro) in those days? :)

My point stands. If everyone loved the 4200 for its price/performance (compared to the MX), then surely games were designed with this in mind. The higher up the scale the average is, the better.
 
At least then you knew what you were getting and could depend on games being designed for a smaller range of hardware, and you could expect decent performance from a mid-range solution, as most developers planned on gamers having that level of performance. I think the 8800GT (and perhaps the 3870) will thankfully bring some of that back, with games designed toward that level of performance and not above it (G80/R600) or below it (the craptacular G84/RV630).

Actually as far back as I can remember Devs have either...

1. Targeted what they thought would be a high-end setup by the time their game released (Quake 1+2, Unreal, UT, Doom 3, Far Cry, UE3, Crysis, etc.) and put in scaling options to lower graphics quality/resolution so the game would perform well on lower-end cards/systems. I know many people who couldn't run Quake 1 higher than 320x240, and even that was 20 FPS or below; the same went for Quake 2 and Unreal.

Or...

2. Targeted what they thought would be the midrange (Half-Life 2) and threw in some added visual goodies for those with better cards. Valve did an amazing job of getting it to run extremely well on mid-range cards, but I somehow think this had more to do with the extreme delays in its release.

So personally, I hope the main engine devs continue to target what they think will be the ultra high end and allow scaling options so that people with lesser hardware can still play.

Consoles are there for devs/gamers wanting a static platform as the target for development.

Regards,
SB
 
More like the 9700/9500 went unchallenged for several months, while the Ti 4x00s were still paired against the 8500s.

By the same standard you would have to say the more recent comparisons like 7800 -> X850 and 8800 -> X1950 were not proper. I agree that the 9700/9500 were unmatched in their segments during their entire lifespans, but at one point it was the Ti 4x00 series they were putting the hurt on, and later the FX series.
 
By the same standard you would have to say the more recent comparisons like 7800 -> X850 and 8800 -> X1950 were not proper.
Of course, the X850s share the same market as the 6800s, and the X1950s share the market with the 7900s. There is no reason to match them up against cards they were never meant to sit on the same shelf as.
 
Of course, the X850s share the same market as the 6800s, and the X1950s share the market with the 7900s. There is no reason to match them up against cards they were never meant to sit on the same shelf as.

While I agree it's not a "fair" comparison, sometimes it's all you've got. You have to compare what's actually on the market. In situations like the 8800 GT vs. the HD 3850/3870, I think it makes more sense to do that comparison than to say "ATI's only $200 part right now is the 2900 GT, so let's compare the 8800 GT to it."
 
http://www.theinquirer.net/gb/inquirer/news/2007/11/05/riva-tuner-supports-radeon-hd

It's L'inq, of course, but according to them, the new RivaTuner unveiled a feature of the HD 38xx chips: the clock frequency will apparently be dynamically adjusted based on load, all the time.

How's that supposed to work? The load during rendering on various units is all over the place even within a single frame. Not sure how dynamic clocking (especially if it's across the whole chip) is feasible or even useful in this case.
 
The load during rendering on various units is all over the place even within a single frame. Not sure how dynamic clocking (especially if it's across the whole chip) is feasible or even useful in this case.
I read it as a means to downclock when not in "3D". For example, under a Windows Classic theme you'd get "2D" clocks, under Aero "2.5D" clocks, and in gaming "3D" clocks. The "heavy scenes vs. cutscenes" angle is just marketing, IMHO.
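To make that reading concrete, here's a minimal sketch of a coarse governor that steps between a few fixed clock states based on workload. The state names, clock values, and thresholds are purely hypothetical; nothing here is taken from RivaTuner or AMD's actual PowerPlay tables.

[code]
# Hypothetical clock states (MHz) for a coarse "2D / 2.5D / 3D" scheme.
# All numbers and thresholds below are invented for illustration.
CLOCK_STATES = {
    "2D": 300,     # plain desktop (Windows Classic)
    "2.5D": 500,   # composited desktop (Aero)
    "3D": 775,     # sustained gaming load
}

def pick_clock(gpu_utilization: float, compositor_active: bool) -> int:
    """Pick a core clock from a handful of fixed states.

    gpu_utilization: fraction of busy time over a sampling window (0..1).
    compositor_active: whether a 3D compositor (e.g. Aero) is running.
    """
    if gpu_utilization > 0.5:      # sustained 3D load -> full clocks
        return CLOCK_STATES["3D"]
    if compositor_active:          # light 3D just for the desktop
        return CLOCK_STATES["2.5D"]
    return CLOCK_STATES["2D"]      # idle classic desktop

print(pick_clock(0.90, True))   # 775 -> gaming clocks
print(pick_clock(0.10, True))   # 500 -> Aero clocks
print(pick_clock(0.05, False))  # 300 -> 2D clocks
[/code]

The point is just that the decision would be made over coarse sampling windows, not per draw call, which sidesteps the within-a-frame load variation mentioned above.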
 