The Official RV630/RV610 Rumours & Speculation Thread

In one of the reviews I read, it was up to 1.1 GHz for GDDR4.

Ah yes. I thought it would be logical for them to continue with 1 GHz GDDR4, it being on the X1950 and all.
Then again, the 2900s came with the 1.1 GHz as well...

At least they won't go for 900 MHz...
 
In one of the reviews I read, it was up to 1.1 GHz for GDDR4.

Just think if they had actually gone with a 256-bit memory bus instead of 128-bit; ah well, I guess that is where RV670 comes into the light. Of course, 70 GB/s might be a bit excessive for a midrange part (or they could use cheaper memory like 800 MHz GDDR3 and still achieve over 50 GB/s of bandwidth).

No idea whether the extra cost in PCB and routing would be offset by the price difference between GDDR3 and GDDR4, though.
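
If anyone wants to sanity-check those bandwidth figures, here's a quick back-of-the-envelope calculation in Python (a minimal sketch; the clocks and bus widths are just the ones mentioned above, not confirmed specs):

```python
# Rough peak-bandwidth math for the configurations discussed above.
# GDDR3/GDDR4 transfer twice per memory clock, so effective data rate = 2 x clock.
def peak_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_sec = mem_clock_mhz * 1e6 * 2  # DDR: two transfers per clock
    return bytes_per_transfer * transfers_per_sec / 1e9

print(peak_bandwidth_gbs(1100, 128))  # ~35.2 GB/s: 1.1 GHz GDDR4 on a 128-bit bus
print(peak_bandwidth_gbs(1100, 256))  # ~70.4 GB/s: same memory on a hypothetical 256-bit bus
print(peak_bandwidth_gbs(800, 256))   # ~51.2 GB/s: cheaper 800 MHz GDDR3 on 256-bit
```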
 
[Image: spec table comparing RV630 and R600]


Wondering why the triangles/sec figure for RV630 is higher than R600's :!:
 
2900 and 2600's setup rates are 1 triangle per clock; 2400's is 1 every two clocks. So the 2600 XT's higher core clock is what gives it the higher peak. [Edit] Which also means that the HD 2600 XT's tessellation peak is higher as well.
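A quick illustration of that relationship (the core clocks used here are the commonly rumoured ones for these parts, so treat them as assumptions):

```python
# Peak triangle throughput = triangles set up per clock x core clock.
def peak_mtris_per_sec(setup_rate_per_clock: float, core_clock_mhz: float) -> float:
    return setup_rate_per_clock * core_clock_mhz

print(peak_mtris_per_sec(1.0, 800))  # HD 2600 XT: ~800 Mtri/s (1/clock at an assumed 800 MHz)
print(peak_mtris_per_sec(1.0, 742))  # HD 2900 XT: ~742 Mtri/s (1/clock at 742 MHz)
print(peak_mtris_per_sec(0.5, 700))  # HD 2400:    ~350 Mtri/s (1 every 2 clocks, assumed 700 MHz)
```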
 
If the 2600 is 45-50 W, then I'd be interested to see what a CrossFire version of this would do for, say, $300 and a max of about 100 W. Certainly that would not need a PSU upgrade for most folks and might be an example of where more than one card makes a lot of sense.

The Nvidia 8600 is approximately 1/3 the spec of the 8800 GTS and does roughly 50% of its performance, so if the 2600 could pull off the same trick from its 1/3 spec (rough rule of thumb here), then a CrossFire pair of these for 100 W and $300 would fill a nice gap.

I'm definitely interested in seeing how these perform and overclock and who brings out the GDDR4 version at what cost.
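
To put rough numbers on the CrossFire idea above (purely illustrative; the 50% single-card figure and the scaling factor are assumptions, not benchmarks):

```python
# If one card delivers ~50% of the high-end part and the second card adds ~80%
# of its performance in CrossFire, the pair lands at roughly 90% of the high-end part.
def crossfire_estimate(single_card_fraction: float, cf_scaling: float) -> float:
    return single_card_fraction * (1 + cf_scaling)

print(crossfire_estimate(0.50, 0.80))  # ~0.90 of the high-end card, for ~100 W and ~$300
```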
 
http://forum.folding-community.org/ftopic19851-0-asc-30.html
mhouston said:
HD 2900XT FAH Summary:

The shipping client requires an update to run. The core is currently running, but having issues with the full client. Performance-wise, the 2900XT is 2.2X faster than R580 for the force calculation, but the surrounding kernels and the interaction with the CPU currently limit the speed-up to 45% faster for the full client. We are working to improve speed and get more cores supported for the next release. Power-wise, it depends on board temp, but we see 160-180W* when folding, roughly double R580. I haven't tested on the 2600/2400 yet, but it should work. The 2600 is *very* power efficient, and the 2400 in some configs doesn't even have a fan (!).

We are currently making lots of updates to the GPU client, including support for more cores and WUs (although debugging is taking longer than expected). When we get through this next push, we will release a new client which will add official support for the newer chips.
And:
http://forum.folding-community.org/ftopic19851-0-asc-30.html#bot
I haven't had enough time with a 2600 to tell you. The power requirements are a heck of a lot lower. I think it's going to be a toss-up between R580, which should start getting really cheap, and the 2600, but the 2600 will give you DX10 support for gaming, good HD video support, and HDMI.

As for CTM, it should help our CPU load and play better with multiple GPUs (DirectX under XP seems to serialize CPU communication to GPUs), but we need to work on getting the surrounding kernels tuned. This should also help R5XX performance. I still haven't heard anything more about Vista support (CUDA struggles there as well...)
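
The gap between the 2.2X force-kernel speed-up and the ~45% whole-client speed-up quoted above looks like straightforward Amdahl's law. A small sketch of that relationship (the runtime split is an illustrative assumption, not something mhouston stated):

```python
# Amdahl's-law view of the numbers above: speeding up only the force kernel by 2.2x
# leaves the surrounding kernels and the CPU interaction as the limiter.
def overall_speedup(accelerated_fraction: float, kernel_speedup: float) -> float:
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / kernel_speedup)

# If the force calculation were ~57% of total runtime, a 2.2x kernel speed-up
# would yield roughly the reported ~1.45x overall.
print(overall_speedup(0.57, 2.2))  # ~1.45
```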
 
3.2 Gpix/s fill rate, 6.4 Gtex/s filtering rate, and 120 SPs vs. a lot more fill rate and filtering rate with only 32 SPs.
Great. :mad:
 
3.2 Gpix/s fill rate, 6.4 Gtex/s filtering rate, and 120 SPs vs. a lot more fill rate and filtering rate with only 32 SPs.
Great. :mad:

AFAIK, G84 has more texture address units, but the same filtering capabilities per cluster as G80. So the filtering ratio of RV630 vs G84 is better than that of R600 vs G80; they should be almost equal in filtering.
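
FWIW, the ratio argument can be sanity-checked against peak bilinear texel rates; a quick sketch (all unit counts and clocks here are the commonly cited figures for these parts, so treat them as assumptions):

```python
# Peak bilinear filtering rate = texture filter units x core clock.
def gtex_per_sec(filter_units: int, core_clock_mhz: float) -> float:
    return filter_units * core_clock_mhz / 1000.0

rv630 = gtex_per_sec(8, 800)   # ~6.4 Gtex/s (matches the figure quoted above)
g84   = gtex_per_sec(16, 675)  # ~10.8 Gtex/s (assumed 8600 GTS configuration)
r600  = gtex_per_sec(16, 742)  # ~11.9 Gtex/s
g80   = gtex_per_sec(64, 575)  # ~36.8 Gtex/s (assumed 8800 GTX configuration)

print(rv630 / g84)  # ~0.59 -- RV630 vs G84
print(r600 / g80)   # ~0.32 -- R600 vs G80, i.e. a worse ratio, as noted above
```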
 
AFAIK, G84 has more texture address units, but the same filtering capabilities per cluster as G80. So the filtering ratio of RV630 vs G84 is better than that of R600 vs G80; they should be almost equal in filtering.
Good catch, I missed that.
Still, trilinear filtering is faster on G84. Maybe RV630 might perform better; I have no basis to speculate one way or the other.
 
I would think, from a strategic point of view, it would make the most sense to play towards an apparent strength -- smaller and cheaper. While NV has yet to demonstrate 65nm, ATI has two on deck. If they can win the market where the majority of sales are made, they can choke NV off. Putting out an R600 at 65nm would theoretically be easier than pulling off an architectural shift plus die shrink. Presumably the next battle is for the back-to-school market. It remains to be seen what the R6x0 chips look like, but I think most will agree that the G84/6 were pretty 'meh' (at best). If ATI can refresh the R600 quickly at 65nm, I would expect them to do that, and suck the life right out of the GTS parts.

Not sure how much of a redesign it would take to add more bilerps to their TUs. I expect they would if they could, but winning the high end is icing at this point. ATI looks like it has a decent shot at doing well in the lucrative section of the market, and AMD is looking for positive cash flow....
Two remarks:
Nvidia already has several 65nm parts in production.

For the Tier One OEM back-to-school market, RV610/630 are too late. G84/86 will own nearly all of the mid-range market until the Xmas refresh...
 
For the Tier One OEM back-to-school market, RV610/630 are too late. G84/86 will own nearly all of the mid-range market until the Xmas refresh...

Well, we're really in the wrong thread for this, but what makes you think so? Working with OEMs is actually usually an early forward-looking thing, whereas channel is near the tail of the process, isn't it? I'm getting confident vibes out of AMD re RV6xx design wins with OEMs. Whether it will outsell G8x or not is a different question, but no sense of them having missed the boat with OEMs this cycle.
 
[Image: GeCube card with a non-reference cooler]



Looks like GeCube is not using the reference cooler; I hope this card can be overclocked very well :smile:

That is a very sexy-looking card. THAT I would buy, provided that it has good IQ. I have a spare PC that could use an upgrade, and 512MB of GDDR4 might just blow the doors off an 8600 GTS (in fact, I'd bet on it).

The fact that it will very likely fold well is also a motivator for me. Add to that, it will likely fit comfortably in an SFF - I like it :D.
 
Well, we're really in the wrong thread for this, but what makes you think so? Working with OEMs is actually usually an early forward-looking thing, whereas channel is near the tail of the process, isn't it? I'm getting confident vibes out of AMD re RV6xx design wins with OEMs. Whether it will outsell G8x or not is a different question, but no sense of them having missed the boat with OEMs this cycle.

Well, I can say that because I sold GPUs to Tier One OEMs and I know the validation timeline. For example, final qualification at HP for back-to-school is early June with final hardware, and shipments must start early July. The only chance for AMD is to have final hardware in the next two weeks, with final software working perfectly with the HP bundle and passing HP's strict compatibility tests. Looking at the messy/buggy current R600 drivers, I don't see how they can fix that in two weeks. In this business, shipment capacity (we're talking about millions of pieces in one month) and schedule execution are the keys to success. The guys who make decisions at HP don't like to take any risk, and they don't live on promises. If they feel there is a single risk in choosing AMD, they will always choose the secure option.
Is that enough of an argument for you? :cool:
 
Wouldn't it be the case that notebook qualification would be even further out than desktop? If so, how or why would HP have already picked the HD 2600 for a new notebook design?

You're right, sorry, my bad. Of course, I was talking about add-in board GPUs in the desktop business. Laptops and integrated parts are other stories; they involve, for example, co-hardware development.
 