The LAST R600 Rumours & Speculation Thread

Status
Not open for further replies.
Wouldn't it be sad if the drivers that include those strings (probably the Cats that will ship with the card) are WHQL before Nvidia gets their equivalent out the door? Granted, they probably won't show up in the regular Cats until 7.3/7.4, but looking at Nvidia ATM, that still might beat them. :LOL:

Here's shouting a loud prayer to a silent god the R6xx series works with AFR/OGL/D3D10/CF right out the gate on Vista.

Considering their timing, I would think that Crossfire and OGL got a good look over. At least I hope. (Crosses fingers)
 
This four-card stuff is good stuff. ATI is doing it right.

You have to be the fastest, and as long as Nvidia had quad SLI and ATI didn't it was a problem.
 
They would need to get the hardware out the door first.

Exactly the point; no they wouldn't. It could ship at the same time as the card (meaning it was certified before release, as one would hope), or perhaps a driver release around the series' launch date could include the support/features, along with better OGL/CF/AFR support for legacy cards in Vista, as they seem to be working on that hardcore lately in each monthly driver release.

Of course, it took a couple of Cat releases for ATi to finally support their last couple of cards even after their release date, so that might be asking a bit much. I mean sure, some dude just forgot to include the strings in 6.9, but still. :p

Regardless, they're (hardware/drivers) getting close enough to smell like they're ready, which is always a good sign.

As for 4 GPU support...Whatever. I just want AMD to support asymmetric processing (lesser non-matching card in the 4x slot for physics) on the p965...and I'll be a happy guy.
 
Exactly the point; no they wouldn't.

That's a rather perverse point-of-view IMO. NVIDIA are the spawn of Satan because they can't get drivers out the door to support their hardware on an operating system which wasn't available when folks bought that hardware (hardware which works fine under XP)? But ATI can come out smelling of roses by having launch-day drivers for hardware which is 3+ months late coming to market? Weird. Ah well.
 
But I think with R580/R600 it's quite different. Many of Nvidia's additional transistors were used up simply by the fact that they had to switch towards a more threaded approach, which ATi had already done - hence the large number of transistors on R5xx products.
The threaded design surely is a point; I second that. But another is the USC design and fulfilling the requirements of D3D10. And of course those efficiency enhancements are very expensive. NV has only 16x Vec8 units (with very complex control logic for scalar fetches).
DAAMIT seems to go the other way: quantity instead of efficiency (or some kind of middle way).
64 or 96 Vec4/5 ALUs would do more raw FLOPs per clock. The higher number would eat transistors too.
Anyway: I can see your point. Only 64x Vec4/5 ALUs for ~700 million transistors? That sounds a bit poor. Maybe this info is wrong or there are some surprises we don't know about yet. ;)
 
Anyway: I can see your point. Only 64x Vec4/5 ALUs for ~700 million transistors? That sounds a bit poor. Maybe this info is wrong or there are some surprises we don't know about yet. ;)
Disabled "quads"? Anyway, R520 was a bit similar - only 16 "pipelines" and >320M transistors... R600 could be scalable in a similar way, and further ALUs could be cheap in terms of transistor count.
 
Disabled "quads"? Anyway, R520 was a bit similar - only 16 "pipelines" and >320M transistors... R600 could be scalable in a similar way, and further ALUs could be cheap in terms of transistor count.

That's exactly what he's saying - if ALUs are cheap, 64 of them sounds too low for 700 million transistors.
 
Now, over at the INQ, Fuad has updated his R600 story by adding the R600 Uber Edition, which he suggests will come with watercooling on the card (not as an add-on unit); the long one will be equipped with GDDR4 while the short one will use GDDR3. Meanwhile, they also suggest that "AMD plans an absolute GPGPU monster", claiming it will be the R600 with 2-4GB of memory... sounds rather crazy :???:

He probably heard that R600 could support up to 4GB and went on another one of his fantasy binges. Chip support is one thing, getting that much memory onto a PCB is another. Are there even 1024-Mbit modules?
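For what it's worth, the module math isn't hard to sketch. A quick back-of-the-envelope check (my own numbers, not anything from the article) of how many GDDR modules a 2-4GB frame buffer would take at plausible densities:

```python
# How many DRAM modules does it take to reach a given frame buffer size?
# Densities below (512 and 1024 Mbit per module) are my assumptions for
# the GDDR parts of the era, not figures from the article.
MBIT_PER_MB = 8

def modules_needed(total_gb, density_mbit):
    """Number of DRAM modules needed for total_gb at density_mbit each."""
    total_mbit = total_gb * 1024 * MBIT_PER_MB
    return total_mbit // density_mbit

for gb in (2, 4):
    for density in (512, 1024):
        print(f"{gb} GB at {density} Mbit/module -> "
              f"{modules_needed(gb, density)} modules")
```

Even granting 1024-Mbit parts, 4GB means 32 modules on one PCB, which is why the "GPGPU monster" claim smells like a fantasy binge.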
 
http://vr-zone.com/?i=4622

ATi R600XTX/XT/XL Series Unveiled

VR-Zone has learned some new details on the 80nm R600 today: there will be two SKUs at launch, XTX and XT. There will be two versions of the R600XTX, one for OEM/SI and the other for retail. Both feature 1GB of GDDR4 memory on board, but the OEM version is 12.4" long to be exact, while the retail one is 9.5" long. The above picture shows a 12.4" OEM version. The power consumption of the card is huge: 270W for the 12" version and 240W for the 9.5" version. As for the R600XT, it will have 512MB of GDDR3 memory onboard, is 9.5" long, and consumes 240W of power. Lastly, there is a cheaper R600XL SKU to be launched at a later date.

R600XTX.jpg
 
The cooler fins + heatpipe on the XTX look the same as on the XT; it's just that the fan on the XT sits on top of the fins, whereas the XTX has it next to the block.
 
Wow that's a lot of copper! I wonder why the OEM version pulls more power than the Retail. And why is there that handle looking thing to the right of the fan?
 
And why is there that handle looking thing to the right of the fan?

We've tried to explain that a few times already, right? It's more of a shipping bracket.
The handle slides into a slot (which you can lock) so that during shipping there is even less chance of card movement.

There are pictures in this thread of other server/workstation-based cards that have the same handle.

It's an OEM card... please...
 
Wow that's a lot of copper! I wonder why the OEM version pulls more power than the Retail. And why is there that handle looking thing to the right of the fan?

Is it possible that the OEM version uses cores that couldn't make the retail specification? I understand that when testing the core logic, it always results in separation into grades A, B, C... A may go for retail and B may be for OEM, etc. Based on that assumption, the OEM version would not be suitable for overclocking, as an OEM card is not intended to be modified or overclocked... Just my guess...

Edit: Typo...
 
Man, am I getting tired of that OEM card shot. Who the heck cares? If that's not what we'll be buying, who cares?

The interesting part is that their own shot clearly shows two 6-pin connectors, which limits the power usage to 225W, and then their text claims 270W. Err? :???:
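The arithmetic behind that 225W ceiling, spelled out (the per-connector figures below are the PCIe spec limits I'm assuming: 75W from the slot, 75W per 6-pin, 150W per 8-pin):

```python
# Max board power = slot power + sum of auxiliary connector limits.
# Spec values assumed: PCIe slot 75 W, 6-pin 75 W, 8-pin 150 W.
SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def board_power_limit(connectors):
    """Upper bound on board power for a list of aux power connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(board_power_limit(["6pin", "6pin"]))  # 225 W - short of the claimed 270 W
print(board_power_limit(["6pin", "8pin"]))  # 300 W - would cover it
```

So two 6-pins simply can't deliver 270W within spec; a 6-pin plus 8-pin combination could.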
 
I'd imagine the difference is simply down to the revision it was based on. They got one that worked and placed an order for some, then revised it to run cooler/faster and started ordering those. After a month or two everything should be based on the newer of the two revisions.

I'd say it's entirely possible they're sitting on a stockpile of the OEM-bound chips atm.
 
Man, am I getting tired of that OEM card shot. Who the heck cares? If that's not what we'll be buying, who cares?

The interesting part is that their own shot clearly shows two 6-pin connectors, which limits the power usage to 225W, and then their text claims 270W. Err? :???:

I thought the same thing, but if you zoom in on the left connector, it looks like there are two unused power pins. Maybe this is a lower-clocked dev part and the release version will have a 6-pin and an 8-pin.
 