The Official RV630/RV610 Rumours & Speculation Thread

Status
Not open for further replies.
http://www.gecube.com/products-detail-sas.php?prod_id=66137#Specification


[Image: GC-RX26XTG4-E3_B.jpg]



The card looks great, but I'm still not digging that 128-bit bus.
 
I still don't understand what exactly you mean by backchannel features :cry:
I'll try one last time, and if I still don't understand, I'll give up :smile:
He means features that are available but outside the formal API you're using to program the hardware. So if a developer wants to use R600's tessellator in a D3D10 game (for example; you could pick a truckload of others from D3D9 programming), they'll have to code two paths. D3D10 was designed pretty much to avoid that kind of thing: you must be feature complete, with very few exceptions and certainly none that implement the main rendering stages.
 
Yes, I've heard the "AGP is still good" line before. I must have had a momentary lapse in judgement to publicly state that a dead bus should just die (knowing I'd bring out the AGP junkies). I still think there's no reason to hold onto AGP these days. I honestly don't see how AMD, as a platform company, could not want to divert people to PCIe at this point. AGP is such a step backwards.

I'm with ya, bro. I'm ready to use salt, pennies on the eyes, silver bullets, wooden stakes thru the heart; whatever it takes to put this "Zombie Bus From Hell" to rest.
 
Bah, just start selling Geotechbyte GeForce 8800 Ultra AGP cards for $100 that have mock-up chips on them and will actually short out the bus upon installation. Do the world a favor while making a little money as a reward.
 

:LOL: Devious plan...I like it :D
 
http://www.gecube.com/products-detail-sas.php?prod_id=66137#Specification


The card looks great, but I'm still not digging that 128-bit bus.

At 2200MHz, though, it will have nearly the same bandwidth as an X1900 GT @ 1200MHz/256-bit. If you can eke out an overclock of 200MHz, you'd be right there. It's not great, but it's not horrible either, especially for a DX10 mainstream part. Add the fact that, as a folder, it is probably going to be pretty good. Not a part for a gamer, per se, but for a spare or dedicated folding box, not bad at all.
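The bandwidth comparison above is just bus width times effective data rate; a quick sketch of the arithmetic (the clocks here are effective data rates in MT/s, as quoted in the posts):

```python
# bandwidth (GB/s) = bus width in bits / 8 (bytes) * effective data rate (MT/s) / 1000
def mem_bandwidth_gbs(bus_bits, effective_mts):
    return bus_bits / 8 * effective_mts / 1000

# HD 2600 XT GDDR4: 128-bit bus at 2200 MT/s effective
print(mem_bandwidth_gbs(128, 2200))  # 35.2 GB/s
# X1900 GT-class card: 256-bit bus at 1200 MT/s effective
print(mem_bandwidth_gbs(256, 1200))  # 38.4 GB/s
# with a 200 MHz (400 MT/s effective) memory overclock on the 2600 XT
print(mem_bandwidth_gbs(128, 2600))  # 41.6 GB/s
```

So a modest memory overclock does indeed close, and slightly exceed, the gap to the 256-bit part.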

I have a SFF that I'm using as a Vista test box/folder that could use a DX10 card for testing purposes. I came very close to getting an 8600GT (MSI OC'd- 580/1600 - $136 from the 'egg), but decided to wait and see how these do. I hope it doesn't turn out like the R600, and then I end up PO'd because I could have gotten a GTS sooner and had the same or better performance. My faith in ATI/AMD has been shaken lately, and I'm not thrilled at the rumors of Barcelona delays, either :(.
 
At the very least you have some price reduction to look forward to.

You would think so, but then again, there isn't a huge markup on mainstream parts to begin with. The GT I looked at was pretty cheap anyway - $136. It might knock down the price of the GTS parts, though, that's true. Some of those are over $200. Of course, I wouldn't buy an 8600 GTS to begin with - I don't need stunning performance since it's for a test box, and I'd probably overclock it some :D. I doubt there is much headroom in those overpriced GTS cards. I'd be willing to go closer to $200 for a 512MB/GDDR4 card, though.

My gaming box has an 8800GTS 640MB in it, so I'm not exactly hurting for high-end 3D performance. I am semi-in-need, though, of a mainstream DX10 part because I don't run Vista on anything but that one test machine. Hopefully AMD's offering in that category will be better than what nVidia has shown us so far (in my opinion, not much).
 
He means features that are available but outside the formal API you're using to program the hardware. So if a developer wants to use R600's tessellator in a D3D10 game (for example; you could pick a truckload of others from D3D9 programming), they'll have to code two paths. D3D10 was designed pretty much to avoid that kind of thing: you must be feature complete, with very few exceptions and certainly none that implement the main rendering stages.
The question isn't about features, though; it's about a bus. What can a PCI-E card do, besides performing better, that an AGP card cannot? To my knowledge, developers don't need to code differently for AGP systems.
 
I keep forgetting, what is the ETA on these cards again?

Did you miss it? The Family Launch? On May 14th? That's the reason AMD pushed back the HD 2900XT launch, right? So those cards must have been launched then unless they were disowned from the family.

Oh, am I being too cynical? ;)
 

Oh, hold on a minute. I need to go post my own Futuremark ORB scores in another forum that debunk what that site posted for a 640MB GTS vs. an ATI 2900XT with the magical 8.38 drivers. Oddly, my AMD X2 4000+ and GeForce 640MB GTS are faster than a Core 2 Extreme and GeForce 640MB GTS setup (which of course loses to the 2900XT using the new magic 8.38 drivers). I am experiencing a lack of suspension of disbelief in anything that particular website posts this evening, but thanks for trying ;).

Sorry if that is OT, but it rather came up when the site got mentioned.
 
The question isn't about features, though; it's about a bus. What can a PCI-E card do, besides performing better, that an AGP card cannot? To my knowledge, developers don't need to code differently for AGP systems.
Ah, I just jumped in without reading the surrounding context :oops: You're right, I think. Other than needing PCIe for good readback performance (and even then, some PCIe controllers out there are poor at it), I can't think of anything either.
 
The site says this data comes from Sapphire.

Yes

Some percentage comparisons:


HD2600PRO

"Sefadu"
core: 600MHz
mem: 500MHz (1GHz)
bus: 128bit

ALU: 30% HD2900XT
TEX: 41% HD2900XT
PIX: 20% HD2900XT
B/W: 15% HD2900XT


HD2600XT-GDDR3

"Orloff"
core: 800MHz
mem: 700MHz (1.4GHz) GDDR3
bus: 128bit

ALU: 41% HD2900XT
TEX: 54% HD2900XT
PIX: 27% HD2900XT
B/W: 21% HD2900XT


HD2600XT-GDDR4

"Kohinnoor"
core: 800MHz
mem: 1100MHz (2.2GHz) GDDR4
bus: 128bit

ALU: 41% HD2900XT
TEX: 54% HD2900XT
PIX: 27% HD2900XT
B/W: 33% HD2900XT


XT GDDR4 ~ almost 1/2 of HD2900XT (performance-wise)
XT GDDR3 ~ 1/3 of HD2900XT (performance-wise)
PRO DDR2 ~ 1/4 of HD2900XT (performance-wise)
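The B/W percentages in the table can be sanity-checked against the HD 2900 XT's memory setup, assuming its stock 512-bit bus at 1650 MT/s effective GDDR3 (roughly 105.6 GB/s); a rough sketch:

```python
# Verify the table's bandwidth percentages relative to the HD 2900 XT,
# assumed here to have a 512-bit bus at 1650 MT/s effective (~105.6 GB/s).
def mem_bandwidth_gbs(bus_bits, effective_mts):
    return bus_bits / 8 * effective_mts / 1000

r600_bw = mem_bandwidth_gbs(512, 1650)  # ~105.6 GB/s

# all three HD 2600 SKUs use a 128-bit bus; only the data rate differs
for name, mts in [("HD2600PRO", 1000), ("XT-GDDR3", 1400), ("XT-GDDR4", 2200)]:
    pct = round(mem_bandwidth_gbs(128, mts) / r600_bw * 100)
    print(name, f"{pct}%")  # prints 15%, 21%, 33% - matching the table
```

The percentages line up exactly with the B/W rows above, which suggests the figures are simple clock-times-bus-width ratios rather than measured results.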
 
Why are the two HD 2600 XT PCI-Express SKUs so different when they share the same designation?

I mean, one has 2.2GHz GDDR4, while the other sticks with 1.4GHz GDDR3. That's a huge gap (12.8 GB/s) in available bandwidth.
Wouldn't a little more coherence in the naming scheme be less confusing for prospective buyers?
 