What do you expect for R650

What do you expect for HD Radeon X2950XTX

  • Faster than G80 Ultra by about 25-35% overall

    Votes: 23 16.4%
  • Faster than G80 Ultra by about 15-20% overall

    Votes: 18 12.9%
  • Faster than G80 Ultra by about 5-10% overall

    Votes: 18 12.9%
  • About the same as G80 Ultra

    Votes: 16 11.4%
  • Slower than G80 Ultra by about 5-10% overall

    Votes: 10 7.1%
  • Slower than G80 Ultra by about 15-25% overall

    Votes: 9 6.4%
  • I cannot guess right now

    Votes: 46 32.9%

  • Total voters
    140
Yeah, it was posted before; that's the interview with the reasoning for the family launch, the wattage questions, all that ;)
 
In the end such changes could have a tremendous effect on the performance of the chip. Using the 65 nm node will increase core clocks, as well as reduce leakage. The card could be clocked faster yet pull less power. We would also see the 512 bit memory bus finally get worked like it should. Oh yes, and at the high end we can finally expect to see 1 GB of GDDR-4. If AMD/ATI can actually implement these changes (easier said than done), then we could actually expect to see a product that would potentially outperform the high end 8800 Ultra. AMD/ATI would have a several month lead on NVIDIA's successor to the 8800, which is expected in November. Hopefully, they can get it done this time!
http://www.penstarsys.com/#uvd
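For scale, here's a quick back-of-the-envelope on what a fully worked 512-bit bus with GDDR4 would buy. This is just a sketch: the 2.2 GT/s effective data rate is an assumed figure for illustration, not a confirmed spec.

```python
# Rough theoretical peak bandwidth for a 512-bit GDDR4 bus.
# The effective data rate below is an assumption, not a confirmed spec.
bus_width_bits = 512
effective_rate_gts = 2.2  # assumed GDDR4 effective data rate (GT/s)

peak_gbs = (bus_width_bits / 8) * effective_rate_gts
print(f"Theoretical peak: {peak_gbs:.0f} GB/s")  # ~141 GB/s
```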

Interesting!
 
"Late summer" means "march next year" in ATI terms :p

// sorry, couldn't resist :oops:


On a serious note, I still think they'll cancel R650 and try to pull R700 forward.
 
How do you know? I mean, I know they licensed it a few years ago, but has there been anything mentioned that they're actually using it for the R600?

I was told by a source very close to ATI. There's no direct link to this information, but I can assure you this is the case.
 
I don't know if it will be called "R650" or not, but my expectations for the next major high-end update of R6xx architecture are:

96 ALUs, 32 TMUs, 16 ROPs @ 800-900 MHz, around 1 billion transistors, to be released in spring next year.

This gets back to the 3:1 golden ratio along with the 96 ALUs Orton first muttered about a while ago. ATI seems to be 100% committed to 16 ROPs, and with functionality being offloaded from them, I don't see them going beyond 16. One billion transistors seems right for a chip with this layout and would also align with the "one billion transistor chip under development" first mentioned during the merger coverage. Finally, such a monster seems too early for the fall, but spring (and a year after R600) would make a lot of sense.
 
I think 96 ALUs / 24 TMUs (4:1) could be good, too. 3:1 (R580) was meant for pixel shading (the 8 VS weren't counted). ALU:TEX ratios increase over time... 4:1 probably wasn't the best ratio for spring '07, but it could be quite good for fall/winter '07. 50% more ALUs and TMUs could significantly boost performance, some architecture tweaks or bug fixes too, and 15-20% higher clocks could end up at ~175% of R600's performance. Or maybe I'm just unduly optimistic...
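As a sanity check on that ~175% figure, here's the arithmetic, treating performance as roughly (unit count) × (clock) with R600's 64 ALUs as the baseline. That's a deliberate simplification that ignores memory bandwidth and driver efficiency:

```python
# Back-of-the-envelope scaling for the rumored part vs. R600.
# Assumes performance scales linearly with unit count and clock,
# which ignores bandwidth and efficiency effects.
unit_scaling = 96 / 64                 # 50% more ALUs (and TMUs)
for clock_gain in (1.15, 1.20):        # 15-20% higher clocks
    estimate = unit_scaling * clock_gain
    print(f"{clock_gain - 1:.0%} higher clocks -> ~{estimate:.0%} of R600")
# ~172% and ~180%, bracketing the ~175% guess above
```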
 
How much design effort would be required to incorporate the UVD logic? Given the fuss over its absence, I'd assume that would be the first feature added, though I'd equally assume they would want to make the minimal required engineering effort; a doubling of texture units and ROPs seems a little far-fetched.

Likewise, would it be a major effort to have the R650 ROPs operate at the memory clock rather than the core clock? Would this offer improved antialiasing performance relative to the current arrangement?

How much gain does 55/65nm actually offer over the 80nm node?
 
Considering the 80nm process appears to be leaking like crazy, there could be a fairly large jump going from 80->65. I'm gonna take a wild guess and say R600 was originally aimed at 900-1000MHz. Things started leaking and they started lowering clock speeds to keep power under control.

Based on some of the overclocking results we've seen R600 has lots of headroom if the heat and power consumption aren't an issue. Plus some of the 65nm 2600XTs that are showing up look to be clocked at ~800MHz. When the mid range parts are clocked higher than the high end parts something is up.

UVD should just be a matter of throwing it on there. I really do hope they include it over the shader-assisted acceleration they've got right now.

Also, AMD's AA problem seems to be tied to some hardware conflict on the card. Assuming that gets fixed, they wouldn't need massive speed increases to match any offerings from Nvidia.
 
Based on some of the overclocking results we've seen R600 has lots of headroom if the heat and power consumption aren't an issue. Plus some of the 65nm 2600XTs that are showing up look to be clocked at ~800MHz. When the mid range parts are clocked higher than the high end parts something is up.

Doesn't your paragraph in and of itself explain what is up (or down, as the case may be), namely both the number of transistors AND the process size? I think it would be shocking if a GPU with about half the transistors, on a smaller, more efficient process, were incapable of hitting higher frequencies. BTW, the 8600 GTS is clocked 100MHz higher than its high-end counterpart, so that's not exactly out of the ordinary.

Also, AMD's AA problem seems to be tied to some hardware conflict on the card. Assuming that gets fixed, they wouldn't need massive speed increases to match any offerings from Nvidia.

I thought they were pretty clear on "it's not a bug, it's a feature" in regard to the AA resolve.
 
Um... Maybe I'm forgetting something, but how does decreasing feature size reduce current leakage? :???:

(The way I was taught,) current leakage is a bigger problem as you decrease transistor dimensions. The reduction in the threshold voltage gives an exponential rise to the subthreshold leakage current. And then there's the gate-oxide leakage due to tunneling; this can be alleviated by using oxides of differing dielectric strength or other designs such as dual-gate or Intel's tri-gate design, but each has its own trade-offs for manufacturing.
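For reference, here's the standard textbook form of that exponential dependence (standard symbols, not specific to this thread: V_GS is the gate-source voltage, V_th the threshold voltage, n the subthreshold slope factor, and V_T = kT/q the thermal voltage):

```latex
% Subthreshold leakage: I_sub rises exponentially as V_th drops.
I_{\text{sub}} \propto e^{\frac{V_{GS} - V_{th}}{n V_T}}
  \left(1 - e^{-\frac{V_{DS}}{V_T}}\right),
\qquad V_T = \frac{kT}{q} \approx 26\,\text{mV at } 300\,\text{K}
```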

Anyways, a couple papers on leakage: http://ieeexplore.ieee.org/iel5/92/28328/01266405.pdf
http://domino.research.ibm.com/acas/w3www_acas.nsf/images/projects_02.03/$FILE/02blaauw.pdf

I must be missing something...



The R666 should be a performance demon. :yep2:
 
It's not just a shrink, it's also a different process - plus, it does reduce the thermal envelope. Furthermore, we don't really know how much leakage is really responsible for R600's power consumption. While it's very possible that the high power draw is all due to leakage, we should also not forget that it being the largest GPU ever (transistor count-wise) running at the highest frequency ever might just have something to do with the power draw.
 
While I'd agree it's hard to put any numbers on the leakage, it does seem like a fairly plausible explanation. If the mid-range parts are basically the same general design as the high end, you'd expect them to suck down just as much power, with some improvements due to 65nm. And while I haven't seen any solid numbers for RV630 power usage, all the passive cooling tells me they run fairly cool. Which, in comparison to R600, seems rather odd.

Not 100% sure on this, but if you throw in NVIO, would the G80 have more transistors? Plus I'm sure the added cache of R600 really helps pack 'em in there.

I'm not sure I buy the whole "there isn't a problem with AA resolve" stuff that AMD is sending out. The documents point to it being there, yet it seems rather worthless. You'd think they'd have completely cut it from the chip if they wanted the ALUs doing the math. And on top of that, why not add additional ALUs to compensate for the loss of fixed hardware? While it probably isn't truly broken, they probably disabled it because it was doing something odd, either weird memory access patterns or overloading buffers somewhere.

And I didn't mean to imply that reducing feature size would reduce leakage. I meant it along the lines of 65nm simply not having the leakage issues that 80nm had. So if 80->65 gave, say, a 15% increase in clock speeds, it would be 900*1.15 instead of 740*1.15, for instance.
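Spelling out that arithmetic (a sketch only: 740 MHz is R600's shipping clock, while the 900 MHz "original target" and the 15% node gain are the wild guesses from earlier in the thread, not confirmed figures):

```python
# Hypothetical clock scaling from an 80nm -> 65nm transition.
# The 15% gain and the 900 MHz original target are assumptions
# from this thread, not confirmed figures.
node_gain = 1.15

for base_mhz in (740, 900):
    print(f"{base_mhz} MHz * {node_gain} = {base_mhz * node_gain:.0f} MHz")
# 740 -> ~851 MHz; 900 -> ~1035 MHz
```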
 
Geeforcer said:
It's not just a shrink, it's also a different process - plus, it does reduce the thermal envelope.

What sort of process are they using :?:

Sorry, what do you mean by thermal envelope :?: :oops: The voltage reduction is just a necessity of the smaller transistor, but that entails an exponentially higher static power consumption from the leakage current...

But yeah, you're right: it is hard to say how much of the power consumption is leakage, and I don't doubt the number of transistors and the clock speed contribute greatly to the high power draw, but in the end that same number of transistors still contributes to the total leakage power.

And I didn't mean to imply that reducing feature size would reduce leakage. I meant it along the lines of 65nm simply not having the leakage issues that 80nm had. So if 80->65 gave, say, a 15% increase in clock speeds, it would be 900*1.15 instead of 740*1.15, for instance.

I didn't mean you in particular; there was a quotation earlier by Shtal. :oops:

Anyways... ah sorry, just my two cents on the physical hardware engineering... carry on. :oops:
 