Geeforcer: RV630 has 1/4 the ROPs and 1/4 the memory bus of R600. The ratio is exactly the same. We know that R600 isn't BW-limited in 3DMark06. Why should RV630 be?
It's very unlikely (scroll down). The X1600 series (4 ROPs + 128bit bus) shows similar results. Actually, I have never seen a single situation where faster memory would significantly increase the 3DM06 score...

Is the SM3.0 test in 3DMark06 bandwidth-intensive?
> Difference between X1900XTX and X1950XTX is ~30% (1550MHz/2000MHz), but the difference in SM3.0 score is ~3%. R580 has 16 ROPs for a 256bit bus; RV630 has 4 ROPs for a 128bit bus. I can't see any reason why RV630 would be able to saturate its 128bit bus so much better with just 8 TMUs and 4 ROPs than R580 saturates its 256bit bus with 16 TMUs and 16 ROPs.

no-X, I can see faster RAM making for a higher score in these head-to-heads: X1950XTX vs. X1900XTX, X1950XT vs. X1900XT, 7800GTX-512 vs. 7950GT, X1300XT vs. X1600P--though I don't know if all were tested with the same drivers.
Is it enough to explain the difference in Hardspell's and PCOnline's scores? Dunno. Without thinking too hard about it, I'd guess RV630 would be better equipped to take advantage of the extra bandwidth than the similarly 128-bit RV530s above. Though the bandwidth difference of the RV630s isn't as pronounced as that of the RV530s, that score increase is at least naively (looking only at the hardware) theoretically possible (1100/700=1.57, 2105/1627=1.29).
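The ratio check above can be spelled out in a few lines. This sketch just recomputes the figures quoted in the posts (the clocks and scores are theirs, not mine):

```python
# Sanity-check the "memory clock vs. score" ratios quoted above.
# Effective memory clocks (MHz) and SM3.0 scores are the figures
# from the thread; nothing here is independently measured.

def ratio(a, b):
    """Return a/b rounded to two decimals."""
    return round(a / b, 2)

# X1950XTX vs. X1900XTX: big memory-clock gap...
print(ratio(2000, 1550))   # 1.29

# ...yet the thread says the SM3.0 score gap is only ~3%.

# RV630 boards in the Hardspell/PCOnline comparison:
print(ratio(1100, 700))    # 1.57  (memory clock ratio)
print(ratio(2105, 1627))   # 1.29  (score ratio)
```

So the claimed score gain (1.29x) sits well inside the clock gain (1.57x), which is why it is "at least naively" possible.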
Julidz, crap!
Do you find the following to be telling the truth?
...
Can the Radeon 9200 claim to have Vista AERO Glass desktop compatibility (DX9 hardware support required), as the FX 5200 does now?
Difference between X1900XTX and X1950XTX is ~30% (1550MHz/2000MHz), but the difference in SM3.0 score is ~3%. R580 has 16 ROPs for a 256bit bus; RV630 has 4 ROPs for a 128bit bus. I can't see any reason why RV630 would be able to saturate its 128bit bus so much better with just 8 TMUs and 4 ROPs than R580 saturates its 256bit bus with 16 TMUs and 16 ROPs.
If RV630 is so limited by insufficient BW that even 3DMark shows such a massive (never-before-recorded) performance gain, why would ATi cripple this chip with a 128bit memory bus?
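For reference, the raw numbers behind these bus-width arguments follow from the usual peak-bandwidth formula: bus width in bytes times effective data rate. A minimal sketch using the effective memory clocks mentioned in the thread:

```python
# Theoretical peak memory bandwidth:
# (bus_bits / 8) bytes per transfer * effective data rate in GT/s.

def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000  # GB/s

# R580 (X1900XTX): 256bit @ 1550MHz effective
print(bandwidth_gb_s(256, 1550))  # 49.6

# R580+ (X1950XTX): 256bit @ 2000MHz effective
print(bandwidth_gb_s(256, 2000))  # 64.0

# RV630 (HD2600XT with GDDR3): 128bit @ 2200MHz effective
print(bandwidth_gb_s(128, 2200))  # 35.2
```

Even with 2.2GHz GDDR3, a 128bit RV630 has barely more than half the peak bandwidth of the 256bit R580 boards.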
I think a reviewer's mistake is more likely (dyr says that the tested card is an HD2600PRO with GDDR3... the PRO's PCB supports DDR2, so the tested card is probably built on the XT's PCB, which supports GDDR3. Maybe the reviewer misidentified the card because of the XT-like PCB). To me, at least, this theory is more likely than believing that an HD2600XT-GDDR3 would be >20% slower than a GF8600GT. That would mean the HD2600XT-GDDR3 is actually a bit slower than the X1650XT, which has the same BW, 30% lower texture fillrate, and an older and slower shader core (110 G-MADDs vec3+scalar for RV560 vs. 192 G-MADDs superscalar for RV630).
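The G-MADD figures above can be reproduced as MADD lanes times core clock, counting a MADD as two flops. The thread only gives the end figures (110 and 192); the core clocks (575MHz for X1650XT, 800MHz for HD2600XT) and lane counts in this sketch are my assumptions, so treat it as a plausibility check rather than spec:

```python
# Rough shader throughput: MADD lanes * core clock, with a MADD
# counted as two flops (multiply + add).
# NOTE: clocks and lane counts below are assumptions, not from the thread.

def madd_gflops(lanes, clock_mhz):
    return lanes * clock_mhz * 2 / 1000

# RV560 (X1650XT): 24 shader units * (vec3 + scalar) = 96 MADD lanes,
# assuming a 575MHz core clock.
print(madd_gflops(96, 575))    # 110.4

# RV630 (HD2600XT): 120 superscalar stream processors,
# assuming an 800MHz core clock.
print(madd_gflops(120, 800))   # 192.0
```

Both results line up with the 110 vs. 192 G-MADD figures quoted in the post.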
Unfortunately, Tech Report lies... there is no acceleration of HD content AT ALL with the HD2900XT as it stands now. Their results are faked for that test, at the least.
Quality of the image is not the issue, because you still get the decoding happening on the CPU, not the video card... hard for quality to differ when it's still being decoded the exact same way.
Also, power requirements, and thereby your electricity bill, are part of the issue, but that's IMHO.
3DMark only benchies should be tormented and killed. In this day and age they mean something along the lines of jacksquat. Remember all the cohorts posting R600 3DMark scores...yup, that really painted an accurate picture of what the chip can do ATM.
> Difference between X1900XTX and X1950XTX is ~30% (1550MHz/2000MHz), but the difference in SM3.0 score is ~3%. R580 has 16 ROPs for a 256bit bus; RV630 has 4 ROPs for a 128bit bus. I can't see any reason why RV630 would be able to saturate its 128bit bus so much better with just 8 TMUs and 4 ROPs than R580 saturates its 256bit bus with 16 TMUs and 16 ROPs.

Again, not sure of the details, but R6xx ROPs aren't R5xx ROPs. (Hell, they aren't even ROPs.) Really, I don't know 3DM06's SM3/HDR bottlenecks, but as pixel-shader-heavy as R580 is, it's still remotely possible that RV630's unified architecture just handles 3DM06 better. How this relates to games (or anything else) is another matter, but I just pointed out some examples of bandwidth boosts translating to framerate gains.
> If RV630 is so limited by insufficient BW that even 3DMark shows such a massive (never-before-recorded) performance gain, why would ATi cripple this chip with a 128bit memory bus?

Overall cost? The Mobility market? "Forward-looking" WRT GDDR4 selling at less of a premium in the future?
> MSI's graphics showcase featured a dual-Radeon HD 2600 XT CrossFire setup squeezed onto a single (and rather large) board. [pic of 800/1400MHz CF card]
>
> But don't expect to see this product in stores. MSI says performance is lackluster and that it's "not a good solution." Instead, AMD fans will want MSI's upcoming Radeon HD 2600 XT: [pic of 800/2200MHz card]
>
> This model is passively cooled, and MSI says performance will be a little lower than that of Nvidia's GeForce 8600 GTS. However, pricing will be 20-25% lower--in fact, MSI says sub-$150 price tags are a certainty for 2600 XT offerings.

The CF 2600XT has only 1.4GHz RAM, whereas the second, single 2600XT packs 2.2GHz. Same core on both.
You can say that about every single game/benchmark in use today, however. There isn't a single game/benchmark that is representative of how any given card will perform in any other given game.
That said, let's burn every single post that only shows benchmarks from fewer than 10 or 12 games. As those aren't in the least representative.
Regards,
SB
You're probably arguing with me because you see this as a post negative towards ATi. It's more of a post negative towards the 3DMark wanking that is prevalent these days. I may be interested in how the COH DX10 path runs on two competitive cards, even though that may bear no relevance to other games, because... guess what, I can actually play COH, and there can be custom means of measuring performance other than the fixed built-in performance test, thus breaking specific optimizations etc. I'm certainly not interested in a xxx-point difference in 3DMark because: a) I can't play it; b) it's a fixed environment, for which specific optimizations that translate to nothing else but that fixed environment are fairly prevalent.

Would you have argued the same way had we still been in the days of the FX? The FXs sucked compared to ATi's parts, but they were fairly close in 3DMark03/05/whatever was available back then... so yeah, it was very representative of future performance, and very useful for gauging the possible evolution of a chip. At least with a game, you'll know how that performs, and you'll actually be able to play it. IMHO, of course.