AMD: R7xx Speculation

ATI RV770 = ATI R650

I remember a long time ago when Dave Orton said the R6xx GPU generation would go up to 96 SPs, while R600 had 64 SPs.

I believe R650 was renamed RV770 with DX10.1 added on.
 
Heh, that reminds me, when I saw one of those at Newegg's site the other day, I thought "that's not a power supply, that's a heater."

I know I haven't kept up with the "up to par" performance segment, but sheesh, a 1000 W PSU. There is no way this segment can keep going in this direction. If that's what it takes to play the latest PC games, then PC gaming might as well be dead.

Even if 1000 W were needed for the highest of the high end (which, looking at Brits' post, it's not even close), that wouldn't translate into being what it takes to play the latest PC games.

My system, with a C2D and an 8800 GTS 640MB on a 430 W PSU, runs all the latest games just fine. A Penryn coupled with a 9600 GT would draw even less power for a similar experience.
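Just to put rough numbers on that, here's a quick sketch; all the per-component figures below are ballpark TDP-style assumptions on my part, not measurements:

```python
# Rough peak power budget for a C2D + 8800 GTS 640MB system.
# All figures are ballpark assumptions for illustration, not measured draws.
components_w = {
    "Core 2 Duo CPU":        65,
    "8800 GTS 640MB GPU":   135,
    "Motherboard + RAM":     50,
    "HDDs / optical / fans": 40,
}

peak_draw = sum(components_w.values())   # roughly 290 W at peak
headroom = 430 - peak_draw               # what a 430 W unit still has in reserve

print(f"Estimated peak draw: {peak_draw} W")
print(f"Headroom on a 430 W PSU: {headroom} W")
```

Even with pessimistic numbers there's comfortable headroom, which is the point: the 1000 W units are for exotic multi-GPU builds, not for what it takes to play games.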

It looks as though ATI is trying to take power efficiency into account with RV7xx as well, so hopefully we won't see any serious increases in power requirements this generation.
 
So do ATi GPUs use multiple clock domains as others have claimed?

I can say for a fact that ATI has been using multiple clock domains since the early nineties.

VGA interface clock domain, PCI clock domain, memory clock domain, GPU engine clock domain. ;)

So when SirEric claims in an interview that they have a whole bunch of them, he's exactly right. He's also not saying anything informative, as long as he doesn't say whether or not the shader clock is decoupled from the rest of the GPU engine clock.
 
http://beyond3d.com/content/interviews/39/7

NVIDIA have gone from a minor reliance on clock domains in their last generation to a rather heavy reliance on them in their current generation. Do you see that as an approach that AMD might find useful in the future? If not, why not?

Well, I think we have over 30 clock domains in our chip, so asynchronous or pseudo-synchronous interfaces are well understood by us. But the concept of running significant parts of the chip at higher levels than others is generally a good idea. You do need to balance the benefits vs. the costs, but it's certainly something we could do if it made sense. In the R600, we decided to run at a high clock for most of the design, which gives it some unique properties, such as sustaining polygon rates of 700Mpoly/sec, in some of our tessellation demos. There's benefits and costs that need to be analyzed for every product.
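As a back-of-the-envelope check on that 700 Mpoly/sec figure: assuming R600's ~742 MHz engine clock and a setup rate of one triangle per clock (both assumptions on my part, not vendor specs), a single high clock for most of the design gets you into that ballpark:

```python
# Rough check of the quoted ~700 Mpoly/sec figure.
# Assumes a ~742 MHz engine clock and one triangle set up per clock;
# both numbers are assumptions for illustration only.
engine_clock_hz = 742e6
triangles_per_clock = 1

peak_rate = engine_clock_hz * triangles_per_clock
print(f"Peak setup rate: {peak_rate / 1e6:.0f} Mpoly/sec")  # ~742 Mpoly/sec
```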
 
Exactly my point: a completely meaningless remark that pretends to answer the question. Clock domains are well understood by everyone. These days, there's not a chip in the world that doesn't have multiple clock domains.
As long as he doesn't specify exactly what they are used for, he might as well have said that the sky is blue: same amount of information content.

Edit: by not being specific enough, the interviewer made it extremely easy to answer the question without revealing anything useful. ;)
 

Chances are high that was his (or those who combed the answers) goal. :D
 
If you consider AMD/ATI's "top-end" performance sub-par, then yes, the price cut matches nicely. However, Nvidia is truly top-end, and their prices show it as well. So far nothing has really changed in that playing field when you consider performance/price ratios.

Well, if you consider $300ish for the 9800 GTX a truly top-end price, then I'd have to agree with you. :)

If not for ATI and their renewed focus on performance in the mid-range, that 9800 GTX would probably have shown up as an 8900 GTX, with a price much closer to the 8800 GTX Ultra.

It can be argued that ATI is only focused on performance in the mid-range because they can't currently compete in the high-end enthusiast space. But regardless of WHY they are focused there, it HAS brought down prices of all video cards quite significantly.

Something I'm sure Nvidia isn't very fond of.

And considering ATI appears to be sticking to this focus for their cards (affordable performance), it should keep prices for video cards down in general. Although I'm sure there will always be room for the Uber Special Edition Super Expensive video card; I suppose a GX2 or X2 could be considered in this segment, but meh.

Yes, I do somewhat miss the heady days of constant and rapid releases of video cards that pushed the performance envelope. But my wallet and energy bill are thanking ATI for pushing prices back down, even if it may not have been by choice.

Regards,
SB
 
The latest top notch PC system (Intel) does not need over 500 watts. The newer CPUs use less power than the system listed below.

A lot of people know that. However, when you've got money to burn and want the best, 1000 watts sounds uber.

This kid in the last year has had a 2900 XT, replaced by three 3870 XTs (which lasted long enough for a few weeks of that uber game the internet is enamored of, 3DMark, LOL), and then replaced by an Nvidia card. Throw in a new case, RAID Raptors, a few TB of hard drives, a Q6600... you're getting the picture, right? You just MUST have one of those 1000 W power supplies.

It's his hobby and he doesn't have to pay for it.

Me, I have a pretty decent system (for work and play), but I have a, gasp, 2600 XT... something I bought as a fill-in for a new motherboard until I decided what I wanted. I haven't been playing any games since I got tired of LOTR, so there is absolutely nothing I require a good video card for. However, I might be doing Age of Conan, so I'll order up a card if and when I decide, and move the 2600 XT to my wife's machine.

Maybe there will be a different choice in May... hopefully the RV770 is out by then, and that may be the choice, or whatever is ideal at that time.
 
Netiquette, shmetiquette--that's just old news! :p The thread you linked links the original source at the bottom of the post, that German (some would say French) site.

The presence of an umlaut (on vowels) and an eszett (characters almost exclusive to German) makes it quite obvious this is German. Even if the umlaut were replaced with an "e" following the vowel, and the eszett were replaced with two "s"s, those letter combinations would be totally out of place in French text.

You're rigorous regarding hardware design; please do not be dismissive of other disciplines. Thank you.
 
Heh, I was (gently!) mocking the poster I linked. I'm aware of the difference b/w French and German, and not aware that I was dismissing either language. (Heck, I'm not entirely lost with one of them.) If your point is that I was favoring one to the detriment of the other, that wasn't my intention, either.

I'm not sure I have something to apologize for, but I was never aware of the Eszett, so thanks for that. :) Lexical_quiver++!
 
I'm not sure if this was covered, and I hate to compare cards that haven't been released, but assuming the R700 is two smaller RV770 chips on one card, wouldn't it be cheaper and better business for AMD to make their high-end card that way than it would be for Nvidia to make their 1+ billion transistor GT200? Or would there be some other manufacturing aspect to putting the two chips together that would cancel out the advantage over placing one chip on one card?
 

It all depends on yields, which 55 nm process version they are using, and how well two RV770s scale.
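To make the "depends on yields" point concrete, here's a rough sketch using made-up die sizes, a 300 mm wafer, and a generic Poisson defect model; none of these numbers are actual RV770 or GT200 figures, they're just there to show why two small dies per card can beat one big die:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die estimate: wafer area / die area, minus an edge-loss term."""
    r = wafer_diameter_mm / 2
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_poisson(die_area_mm2, d0_per_mm2):
    """Simple Poisson yield model: exp(-area * defect_density)."""
    return math.exp(-die_area_mm2 * d0_per_mm2)

D0 = 0.002  # defects per mm^2 -- an assumed, not measured, defect density

for name, area_mm2, chips_per_card in [("two small dies", 250, 2),
                                       ("one big die",    500, 1)]:
    good_dies = dies_per_wafer(area_mm2) * yield_poisson(area_mm2, D0)
    cards = good_dies / chips_per_card
    print(f"{name}: {good_dies:.0f} good dies -> {cards:.0f} cards per wafer")
```

With these assumed numbers the small die yields roughly 145 good dies per wafer against about 41 for the big one, so even after pairing two of them per board you still get more cards per wafer. Whether that wins overall then comes down to packaging/board cost and how well the two chips actually scale.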
 
Is it 100% certain that GT200 is a 1+ billion transistor single chip? I mean, there is this "Tegra" thing floating around. Isn't R700 supposed to be some new kind of multi-core thingy, dual-core but not Crossfire? Maybe NV is doing something similar: Tegra = multi-core but not SLI?
 

OK, one of the downfalls of the 3870 X2 was inter-GPU communication and the fact that textures had to be copied twice, as memory was not shared, leading to very low minimum fps. If this is solved, ATI MAY HAVE a chance.

Tegra is something else.
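Rough illustration of that duplication cost (the sizes below are assumptions for the sake of the example, not measured game data): with no shared memory, each GPU keeps its own copy of every texture, so capacity doesn't add up and every upload happens twice:

```python
# Why unshared memory hurts a dual-GPU card: capacity and upload traffic.
# All sizes are assumptions for illustration, not measured figures.
per_gpu_vram_mb = 512
texture_working_set_mb = 700   # hypothetical working set for a demanding game

# Each GPU holds a private copy, so effective capacity is NOT 2 x 512 MB:
effective_capacity_mb = per_gpu_vram_mb
upload_traffic_mb = 2 * texture_working_set_mb   # every texture pushed over the bus twice

overflow_mb = max(0, texture_working_set_mb - effective_capacity_mb)
print(f"Effective capacity: {effective_capacity_mb} MB")
print(f"Overflow into system RAM: {overflow_mb} MB")   # this is where the min-fps dips come from
print(f"Total upload traffic: {upload_traffic_mb} MB")
```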
 