Heh, that reminds me: when I saw one of those on Newegg's site the other day, I thought "that's not a power supply, that's a heater."
I know I haven't kept up with the "up to par" performance segment, but sheesh, a 1000-watt PSU. There is no way this segment can keep going in this direction. If that's what it takes to play the latest PC games, then PC gaming might as well be dead.
ATI has had split clock domains for a long time in current chips.
Yeah, but not for performance, at least not in the sense that NV uses clock domains. ATi only does it to lower power consumption/heat output (lower clocks in "2D" mode).
That's not differing clock domains, that's engine scaling. Different things.
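For what it's worth, the power-saving side of that is just the usual dynamic-power relation; here's a quick sketch with made-up clock/voltage numbers (not vendor figures) showing why dropping the "2D" clocks and voltage helps so much:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f, so lowering both the "2D"
# clock and the voltage compounds the savings. Numbers below are hypothetical.

def rel_dynamic_power(freq_mhz, voltage, ref_freq_mhz, ref_voltage):
    """Dynamic power relative to a reference clock/voltage state."""
    return (freq_mhz / ref_freq_mhz) * (voltage / ref_voltage) ** 2

FREQ_3D, VOLT_3D = 740.0, 1.20   # assumed "3D" engine clock (MHz) and voltage
FREQ_2D, VOLT_2D = 300.0, 1.00   # assumed lowered "2D" state

ratio = rel_dynamic_power(FREQ_2D, VOLT_2D, FREQ_3D, VOLT_3D)
print(f"2D dynamic power is roughly {ratio:.0%} of the 3D state")
# -> about 28% with these made-up numbers (leakage not included)
```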
So do ATi GPUs use multiple clock domains as others have claimed?
NVIDIA have gone from a minor reliance on clock domains in their last generation to a rather heavy reliance on them in their current generation. Do you see that as an approach that AMD might find useful in the future? If not, why not?
Well, I think we have over 30 clock domains in our chip, so asynchronous or pseudo-synchronous interfaces are well understood by us. But the concept of running significant parts of the chip at higher clocks than others is generally a good idea. You do need to balance the benefits vs. the costs, but it's certainly something we could do if it made sense. In the R600, we decided to run at a high clock for most of the design, which gives it some unique properties, such as sustaining polygon rates of 700 Mpoly/sec in some of our tessellation demos. There are benefits and costs that need to be analyzed for every product.
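To put some rough numbers on the two approaches (the clock and unit figures below are the commonly quoted launch specs, assumed here rather than taken from the interview):

```python
# R600 / HD 2900 XT: most of the chip runs at one high engine clock.
r600_engine_clock_hz = 742e6       # assumed engine clock
claimed_poly_rate = 700e6          # "700 Mpoly/sec" from the quote above
print(f"R600 setup rate: {claimed_poly_rate / r600_engine_clock_hz:.2f} triangles/clock")
# ~0.94, i.e. close to one primitive per clock, which only works because the
# geometry/setup hardware runs at the full engine clock.

# G80 / 8800 GTX: a separate, much faster shader clock domain.
core_clock_hz = 575e6
shader_clock_hz = 1350e6
sp_count = 128
flops_per_sp_per_clock = 2         # counting the MAD only
with_domain = sp_count * flops_per_sp_per_clock * shader_clock_hz
without_domain = sp_count * flops_per_sp_per_clock * core_clock_hz
print(f"G80 MAD throughput with shader domain: {with_domain / 1e9:.0f} GFLOPS")
print(f"Same ALUs locked to the core clock:    {without_domain / 1e9:.0f} GFLOPS")
```

That roughly 2.3x ALU gap is the "heavy reliance" the question is getting at.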
Exactly my point: a completely meaningless remark that pretends to answer the question. Clock domains are well understood by everyone. These days, there's not a chip in the world that doesn't have multiple clock domains.
As long as he doesn't specify exactly what they are used for, he might as well have said that the sky is blue: same amount of information content.
Edit: by not being specific enough, the interviewer made it extremely easy to answer the question without revealing anything useful.
If you consider AMD/ATI's sub-par "top-end" performance, then yes, the price cut matches nicely. However, Nvidia is the true top end, and their prices show it as well. So far nothing has really changed on that playing field when you consider performance/price ratios.
The latest top-notch PC system (Intel) does not need over 500 watts. The newer CPUs use less power than the ones in the system listed below.
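Back-of-the-envelope, with assumed (not measured) component draws, on why ~500 W already covers a high-end single-GPU box:

```python
# Illustrative peak draws for a single-GPU system; all values are assumptions.
components_w = {
    "CPU (quad-core, loaded)": 130,
    "high-end GPU":            160,
    "motherboard + RAM":        60,
    "drives, fans, misc":       50,
}

peak_draw = sum(components_w.values())
psu_size = peak_draw / 0.8   # keep the PSU at or below ~80% load for headroom
print(f"Estimated peak system draw: {peak_draw} W")
print(f"PSU sized for ~80% max load: {psu_size:.0f} W")
# -> roughly 400 W of draw, comfortably inside a 500 W unit
```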
Netiquette, shmetiquette--that's just old news! The thread you linked cites the original source at the bottom of the post: that German (some would say French) site.
I'm not sure if this was covered, and I hate to compare cards that haven't been released. But assuming the R700 is two smaller RV770 chips on one card, wouldn't it be cheaper and better business for AMD to make their high-end card that way than it would be for nVidia to make their 1+ billion transistor GT200? Or would there be some other manufacturing aspect of putting the two chips together that would cancel out the advantage over just placing one chip on one card?
It all depends on yields, which 55 nm process they are using, and how well two RV770s scale.
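As a rough illustration of the yield angle, here's a toy Poisson yield model; die areas and defect density are placeholders, not real GT200/RV770 numbers:

```python
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.4            # assumed defect density

def dies_per_wafer(die_area_mm2):
    """Rough gross dies per 300 mm wafer (standard approximation)."""
    r = WAFER_DIAMETER_MM / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2):
    """Fraction of good dies under a simple Poisson defect model."""
    return math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100.0)

def good_dies(die_area_mm2):
    return dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2)

big_die, small_die = 575.0, 260.0   # hypothetical die areas in mm^2

# One board needs either 1 big die or 2 small dies.
boards_big = good_dies(big_die)
boards_small = good_dies(small_die) / 2
print(f"Boards per wafer, single big die: {boards_big:.0f}")
print(f"Boards per wafer, two small dies: {boards_small:.0f}")
```

With those made-up inputs the two-small-die board comes out well ahead per wafer, even though it uses nearly as much total silicon; whether that survives the extra packaging/board cost and the scaling question is exactly the "depends" part.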
Is it 100% certain that GT200 is a 1+ billion transistor single chip? I mean, there is this "Tegra" thing floating around. Isn't R700 supposed to be some new kind of multicore thingy, dual-core but not Crossfire? Maybe NV is doing something similar -> Tegra = multicore but not SLI?