The Official RV630/RV610 Rumours & Speculation Thread

Geeforcer: RV630 has 1/4 the ROPs and 1/4 the memory bus of R600. The ratio is exactly the same. We know that R600 isn't BW-limited in 3DMark06. Why should RV630 be?

And how do we know this? You realise the R600 can match an Ultra in 3DMark06. There are only one or two things in which the R600 is superior to the G80 Ultra. One of them is bandwidth.

Is the SM3.0 test in 3DMark06 bandwidth-intensive?
 
no-X, I can see faster RAM making for a higher score in these head-to-heads: X1950XTX vs. X1900XTX, X1950XT vs. X1900XT, 7800GTX-512 vs. 7950GT, X1300XT vs. X1600P--though I don't know if all were tested with the same drivers.

Is it enough to explain the difference in Hardspell's and PCOnline's scores? Dunno. Without thinking too hard about it, I'd guess RV630 would be better equipped to take advantage of the extra bandwidth than the similarly 128-bit RV530s above. Though the bandwidth difference of the RV630s isn't as pronounced as that of the RV530s, that score increase is at least naively (looking only at the hardware) theoretically possible (1100/700=1.57, 2105/1627=1.29).
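(For what it's worth, here's the back-of-the-envelope arithmetic behind those ratios as a small Python sketch. I'm assuming both quoted pairs are effective memory clocks in MHz on the same 128-bit bus, which is how I read the post; treat the figures as rumor-grade.)

```python
# Rumor-grade figures from the post above; assuming effective memory
# clocks (MHz) on an identical 128-bit bus, peak bandwidth scales
# linearly with clock.

def bandwidth_gbs(effective_mhz, bus_bits=128):
    # Peak GB/s = effective clock (MHz) * bus width in bytes / 1000
    return effective_mhz * (bus_bits // 8) / 1000

rv530_gain = 1100 / 700    # the RV530-class pair quoted above
rv630_gain = 2105 / 1627   # the RV630-class pair quoted above

print(f"RV530-class pair: {rv530_gain:.2f}x the bandwidth")   # ~1.57x
print(f"RV630-class pair: {rv630_gain:.2f}x the bandwidth")   # ~1.29x
print(f"2200MHz effective on a 128-bit bus = {bandwidth_gbs(2200):.1f} GB/s")
```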

Julidz: Yes, it's quite logical. Even if the review at pconline tested the HD2600PRO and not the XT-3, the XT-4 will end up no better than 10% below the GTS at best...

Julidz, damn! :)
 
The difference between the X1900XTX and X1950XTX is ~30% (1550MHz vs. 2000MHz), but the difference in SM3.0 score is ~3%. R580 has 16 ROPs for a 256-bit bus; RV630 has 4 ROPs for a 128-bit bus. I can't see any reason why RV630 would be able to saturate its 128-bit bus so much better with just 8 TMUs and 4 ROPs than R580 saturates its 256-bit bus with 16 TMUs and 16 ROPs.

If RV630 is so limited by insufficient BW that even 3DMark shows such a massive (never-before-recorded) performance gain, why would ATi cripple this chip with a 128-bit memory bus?

I think a reviewer's mistake is more likely (dyr says the tested card is an HD2600PRO with GDDR3... the PRO's PCB supports DDR2, so the tested card is probably built on the XT's PCB, which supports GDDR3; maybe the reviewer misidentified the card because of the XT-like PCB). To me, at least, this theory is more plausible than believing that an HD2600XT-GDDR3 would be >20% slower than a GF8600GT. That would mean the HD2600XT-GDDR3 is actually a bit slower than the X1650XT, which has the same BW, 30% lower texture fillrate, and an older, slower shader core (110 G-MADDs vec3+scalar for RV560 vs. 192 G-MADDs superscalar for RV630).
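(To put rough numbers on that ROP-to-bus argument, a quick Python sketch using the rumor-grade specs cited in this thread; the G-MADD figures are the ones from the paragraph above, not confirmed specs.)

```python
# ROPs and TMUs per 128 bits of memory bus, per the specs cited above.
chips = {
    # name: (rops, tmus, bus_bits)
    "R580":  (16, 16, 256),
    "RV630": (4,  8,  128),
}
for name, (rops, tmus, bus_bits) in chips.items():
    scale = bus_bits / 128
    print(f"{name}: {rops / scale:.0f} ROPs, {tmus / scale:.0f} TMUs "
          f"per 128 bits of bus")
# -> R580 has twice the ROPs per unit of bus width that RV630 does.

# Shader-throughput ratio from the same paragraph (G-MADDs = billions
# of MADD operations per second, as quoted):
print(f"RV630 vs. RV560 shader throughput: {192 / 110:.2f}x")  # ~1.75x
```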
 
Do you find the following to be telling the truth?


...
Can the Radeon 9200 claim to have Vista AERO Glass desktop compatibility (DX9 hardware support required), as the FX 5200 does now?

I think Zaphod answered the bulk of your post pretty well, much as I was going to, but as for this relationship you see between the 9200/5200 and the Vista AERO desktop, I'm a bit surprised and puzzled by your chronology. These cards are several years old and out of date, even as value propositions today, and pre-dated Vista by years. Why confuse the current bit of FUD under discussion in this thread with products that are years old, and with remarks ATi made (whether you think them FUD or not) several years ago, for a different time, an entirely different market, and an entirely different subject?

I mean, does saying "Well, this is the FUD I think ATi was guilty of several years ago," which concerns nigh-obsolete products, in any way cast light on the *current* bit of FUD casually examined in this particular thread, which has to do with the current products both companies are shipping? Does it somehow make the current FUD talked about here more acceptable? I don't see how...

I think, too, that this issue of the hardware acceleration of UVD in the HD 2900 has been blown way out of proportion. It's not that the HD2900 doesn't "support" UVD playback to a UVD-capable display; it's only that the HD2900 doesn't hardware-accelerate that playback. At least, that's how I understand it at the moment. According to Tech Report's analysis, the difference has nothing to do with the quality of the playback, only with the system reporting higher CPU utilization during such playback.
 
:!: Unfortunately Tech Report lies... there is no acceleration of HD content AT ALL w/ the HD2900XT as it stands now. Their results are faked for that test, at the least.:!:

Quality of the image is not the issue, because you still get the decoding happening on the CPU, not the video card... hard for quality to differ when it's still being decoded the exact same way.:yep2:

Also, power requirements, and thereby your electricity bill, are part of the issue, but that's just IMHO.
 
Or maybe they're mistaken simply because pretty much every brand seems to have its own custom PCB on the 2400 & 2600 series of cards: some feature PCIe power plugs on the XT PCB, some don't, some are noticeably longer than others, etc.



The latest drivers do enable hardware acceleration of VC-1 & H.264 decoding via the shader core on the 2900, but the image gets all distorted with it.
 
lol, I do not call that decoding, I call it re-encoding to garbage!


I get no decode @ all @ 1080p, BTW. Still doesn't change the fact that TechReport did not have any driver capable of accelerated decoding at the time they published their review.:yep2:


Back on topic...


I really enjoy seeing the multitude of design differences in the PCBs. I wonder how each will clock...:devilish:
 
3DMark-only benchies should be tormented and killed. In this day and age they mean something along the lines of jack squat. Remember all the cohorts posting R600 3DMark scores... yup, that really painted an accurate picture of what the chip can do ATM.
 
You can say that about every single game/benchmark that is used today, however. There isn't a single game/benchmark that is representative of how any given card will perform in any other given game.

That said, let's burn every single post that only shows benchmarks from fewer than 10 or 12 games, as those aren't in the least representative. :D

Regards,
SB
 
I'd love you to tell that to the guys at XS. :LOL:

There are people jumping to R600 because it runs 3DMark so much better than the G80-based cards.
 
The difference between the X1900XTX and X1950XTX is ~30% (1550MHz vs. 2000MHz), but the difference in SM3.0 score is ~3%. R580 has 16 ROPs for a 256-bit bus; RV630 has 4 ROPs for a 128-bit bus. I can't see any reason why RV630 would be able to saturate its 128-bit bus so much better with just 8 TMUs and 4 ROPs than R580 saturates its 256-bit bus with 16 TMUs and 16 ROPs.
Again, not sure of the details, but R6xx ROPs aren't R5xx ROPs. (Hell, they aren't even ROPs. :)) Really, I don't know 3DM06's SM3/HDR bottlenecks, but as pixel-shader-heavy as R580 is, it's still remotely possible that RV630's unified architecture just handles 3DM06 better. How this relates to games (or anything else) is another matter, but I just pointed out some examples of bandwidth boosts translating to framerate gains.

If RV630 is so limited by insufficient BW that even 3DMark shows such a massive (never-before-recorded) performance gain, why would ATi cripple this chip with a 128-bit memory bus?
Overall cost? The Mobility market? "Forward-looking" WRT GDDR4 selling at less of a premium in the future?

Anyway, dunno about which board PCO tested, but TR has a small write-up of MSI at Computex and they offer the following nuggets:

MSI's graphics showcase featured a dual-Radeon HD 2600 XT CrossFire setup squeezed onto a single (and rather large) board. [pic of 800/1400MHz CF card]

But don't expect to see this product in stores. MSI says performance is lackluster and that it's "not a good solution." Instead, AMD fans will want MSI's upcoming Radeon HD 2600 XT: [pic of 800/2200MHz card]

This model is passively cooled, and MSI says performance will be a little lower than that of Nvidia's GeForce 8600 GTS. However, pricing will be 20-25% lower—in fact, MSI says sub-$150 price tags are a certainty for 2600 XT offerings.
The CF 2600XT has only 1.4GHz RAM, whereas the second, single 2600XT packs 2.2GHz. Same core on both.

On the plus side, and unlike the X1600XT at launch, it appears AMD will offer the 2600XT at a more reasonable initial price. We'll see if NV's forced to react, what with 8600GTSs going for ~$170 at NewEgg (so ~10-12% cheaper, which can fall under the "a little lower" category).
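(A quick sanity check on those prices in Python, assuming the ~$170 NewEgg GTS street price and MSI's sub-$150 claim both hold; both figures are just the ones quoted above.)

```python
# Street-price gap implied by the figures above (assumptions: ~$170 for
# the 8600 GTS at NewEgg, $150 as the top of MSI's "sub-$150" claim).
gts_street = 170
xt_claimed = 150
gap = 1 - xt_claimed / gts_street
print(f"2600 XT would be ~{gap:.0%} cheaper than the 8600 GTS")
# -> ~12% at these exact prices; MSI's promised 20-25% gap would need
# the XT to land well under $150, or GTS prices to stay put.
```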
 
Overpriced at $150, and that's only the GDDR3 version.

First-generation DX10 mainstream cards are a joke: for 50-60% more money a user can buy ~150% more performance, but that's not the mainstream price range anymore. NV is trying to push the "mainstream" category into the $270-300 price range, and I say no thanks. I think most users say this too; they'll never pay more than $200 for any VGA, so they're stuck if they want a good price/performance DX10 card.
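(The value complaint above, in numbers. The 50-60% and ~150% figures are the poster's own; I'm taking "~150% more performance" to mean 2.5x the baseline and using the midpoint of the price range, so this is a sketch of the argument, not measured data.)

```python
# Perf-per-dollar gap described above: paying 50-60% more money for
# ~150% more performance (i.e., 2.5x the baseline).
base_price, base_perf = 1.0, 1.0
step_price = base_price * 1.55   # midpoint of "50-60% more money"
step_perf = base_perf * 2.5      # "+~150% performance"
value_ratio = (step_perf / step_price) / (base_perf / base_price)
print(f"Step-up card perf per dollar: {value_ratio:.2f}x the mainstream part")
# -> ~1.6x better value, which is the poster's point: the first-gen
# DX10 "mainstream" cards are the worst value tier.
```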
Of course it's not only NV's fault; they want to earn as much money as possible in the first place. It's AMD's fault too for not being able to release better products, not even months later than NV.

We need a third competitor, because what's happening now in the lower segments isn't user-friendly anymore.
 
You're probably arguing with me because you see this as a post negative towards ATi. It's more of a post negative towards the 3DMark wanking that is prevalent these days.

I may be interested in how the CoH DX10 path runs on two competitive cards, even though that may bear no relevance to other games, because... guess what, I can actually play CoH, and there can be custom means of measuring performance other than the fixed built-in performance test, thus breaking specific optimizations, etc. I'm certainly not interested in an xxx-point difference in 3DMark because: (a) I can't play it; (b) it's a fixed environment for which specific optimizations that translate to nothing but that fixed environment are fairly prevalent.

Would you have argued the same way had we still been in the days of the FX? The FXs sucked compared to ATi's parts, but they were fairly close in 3DMark03/05/whatever was available back then... so yeah, it was very representative of future performance, and very useful for gauging the possible evolution of a chip. At least with a game, you'll know how that performs, and you'll actually be able to play it. IMHO, of course.
 
Actually I wasn't taking it as pro/anti ATI or pro/anti NV.

The problem is that there is no one game/benchmark/anything that can give a generalized view of any card.

3dmark matches up with some games. Doesn't match up with most.

Likewise, CoH matches up with some games, doesn't match up with most.

Same goes for Oblivion, Quake 4, Doom 3, etc...

As long as I understand this basic facet of benchmarking, then I have no problem with any benchmark used.

Anytime someone only posts the scores for one game, it's rather meaningless to try to decipher the general performance of a card.

The various 3DMarks are interesting because they tend to focus on the performance of certain aspects of a gaming system. Some tend to focus more on CPU vs. graphics, others more on shader vs. fillrate, or otherwise. I don't use them to base my purchasing decisions on, just like I don't base my purchasing decisions on how well a card does only in Quake 4, or only in Supreme Commander, etc.

Thus my remark that unless a post references 10 or more games/benches, it's worthless to me as a general gauge of where card X stands vs. card Y.

It's just my personal extension of your dissatisfaction with 3dmark06 numbers.

I'm sure you'd probably say the same thing if Quake 3 numbers were posted over and over and over and over again. It'd have roughly the same bearing on general performance as 3dmark does. Or if Far Cry numbers were continuously parroted.

Anyway, my only point with that was that now, more than ever before, you can't use a small sampling of benches to come up with any meaningful generalization of how a card does with regard to its competition, or even within its own family of cards. Unless, of course, your gaming habits are predominantly dominated by a certain type of game/genre. And even then performance can vary wildly.

Regards,
SB
 