I don't know exactly yet, but isn't the "multithreaded rendering" stuff in it even helping multi-GPU cards distribute the workload?
Nahh.
Too true. Well, unless they can make multi-GPU gaming actually work just as well, instead of the super patchy performance that's great in some games and crappy in others.
I think the TS, UAV and CS stuff all basically fragment rendering into such tiny pieces and irregular, transitory blobs of memory that AFR just becomes a bottleneck, as all the sane algorithms try to make multi-frame use of all the goodies brought by this functionality.

Nice evil wish, Mr. Jawed. I doubt it's gonna happen. After all, AMD sits on the committee that makes the spec. :-/
I didn't say so, now did I?

The idle usage doesn't tell you what the load usage is.
Jawed
Hence my wish, to first see a decent GDDR5 implementation.

Are GDDR5-enabled devices not at similar power loads (or less, in the case of the HD 4890) as their performance-competitive contemporaries?
To make a long story short: No, I have no hard data regarding the power consumption of GDDR5 alone.

Although it won't necessarily give the full picture, the first place to start is to look at the voltages of the devices. GDDR5 typically runs at lower voltages than GDDR3.
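For what it's worth, here's the back-of-the-envelope version of that voltage argument. It's only a sketch: the ~1.8 V GDDR3 / ~1.5 V GDDR5 nominals and the plain C*V^2*f dynamic-power model are my assumptions, and it ignores termination, data rates and memory-controller power.

```python
# Rough C*V^2*f scaling argument for memory I/O power.
# Assumed nominal voltages: GDDR3 ~1.8 V, GDDR5 ~1.5 V (check datasheets).
# This ignores termination schemes, data rates and controller power,
# so it only shows what voltage alone can buy you.

def dynamic_power_ratio(v_new, v_old, f_new=1.0, f_old=1.0):
    """P ~ C * V^2 * f, so the ratio is (V_new/V_old)^2 * (f_new/f_old)."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Same effective clocking assumed for simplicity (f_new == f_old).
ratio = dynamic_power_ratio(v_new=1.5, v_old=1.8)
print(f"GDDR5 vs GDDR3 dynamic power from voltage alone: {ratio:.2f}x")
# -> about 0.69x, i.e. roughly 30% less dynamic I/O power, before everything
#    this model leaves out (termination, higher clocks, PLLs, etc.).
```

So voltage alone is worth maybe ~30% on the I/O side, which fits "superior on paper" rather than a guaranteed win at the card level.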
Don't forget that while you may save some bucks on the smaller GPUs in an mGPU card, you're still burning much more on double the memory size and a more complex PCB/power/cooling solution.

It is my opinion that from now on, multi-GPU solutions will always win over monolithic (assuming similar manufacturing costs of both competing products). The current situation has a lot to do with GT200 being not so good. But generally, performance scales linearly with transistor count, while the yield function is concave. The only problem of today's multi-GPU solutions is software. A monolithic GPU means performance is guaranteed in any game; multi-GPU means living in uncertainty.
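To put a rough number on the "performance is linear, yield is not" point, here's a toy Poisson defect model. The defect density, wafer area and die sizes below are illustrative guesses, not figures for any real chip or process.

```python
import math

# Toy yield model: Poisson defects, yield = exp(-D0 * A).
# D0, the usable wafer area and the die sizes are illustrative guesses only.

D0 = 0.3              # defects per cm^2 (assumed)
WAFER_AREA = 70000.0  # usable mm^2 on a 300 mm wafer, very roughly

def die_yield(area_mm2, d0=D0):
    """Fraction of dies of the given area that come out defect-free."""
    return math.exp(-d0 * area_mm2 / 100.0)  # mm^2 -> cm^2

def good_dies_per_wafer(area_mm2):
    return (WAFER_AREA / area_mm2) * die_yield(area_mm2)

big, small = 500, 250  # one big GPU vs. a GPU of half the area
print(f"yield at {big} mm^2: {die_yield(big):.0%}, at {small} mm^2: {die_yield(small):.0%}")
print(f"good dies per wafer: {good_dies_per_wafer(big):.0f} big vs {good_dies_per_wafer(small):.0f} small")
# Two small dies carry roughly the same transistor count as one big die,
# but yield falls off exponentially with area, so the pair of small dies
# comes out cheaper per good unit -- which is the asymmetry argued above.
```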
I'm quite sure that nothing will bring mGPU solutions to the level of single-GPU solutions in terms of flexibility and efficiency, ever.

But maybe this problem can be solved by adding special multi-GPU logic that would make the whole solution work more like a single-GPU system. Yeah, that sounds like Hydra... perhaps future solutions will use this "dispatch processor" model? I don't know, but I feel these are areas worth exploring.
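Just to make the "dispatch processor" idea a bit more concrete, here's a toy sketch of splitting a single frame between two GPUs and rebalancing the split from frame to frame. Everything in it -- the cost model, the GPU speeds, the feedback rule -- is invented for illustration; it's not how Hydra or any real driver works.

```python
# Toy intra-frame load balancer: split the screen at a horizontal line and
# move the line each frame based on how long each GPU took last frame.
# Purely illustrative -- all numbers and the control rule are made up.

def render_time(gpu_speed, pixels):
    """Pretend cost model: time proportional to pixels / speed."""
    return pixels / gpu_speed

def rebalance(split, t0, t1, gain=0.25):
    """Shift the split line toward the faster GPU's side."""
    imbalance = (t1 - t0) / (t0 + t1)
    return min(0.95, max(0.05, split + gain * imbalance))

height, width = 1200, 1920
gpu_speeds = (1.0, 0.7)          # GPU1 is 30% slower (assumed)
split = 0.5                      # fraction of rows given to GPU0

for frame in range(8):
    rows0 = int(height * split)
    t0 = render_time(gpu_speeds[0], rows0 * width)
    t1 = render_time(gpu_speeds[1], (height - rows0) * width)
    print(f"frame {frame}: split={split:.2f} t0={t0:,.0f} t1={t1:,.0f}")
    split = rebalance(split, t0, t1)
# The split converges so both GPUs finish at about the same time.
```

The point is only that something in front of the GPUs has to make decisions like this continuously, per draw call rather than per scanline, which is where all the complexity (and the patchy game support) comes from.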
Well, people tend to expect too much. But why wouldn't a GT30x middle-class part with GT200+ performance and DX10.1/11 support at a $250 price point be a performance bomb anyway? After all, that's exactly what's expected from AMD's RV870.

Hmm, that's an interesting point. For AMD, this strategy proved useful. On the other hand, people kind of expect a performance bomb, not a middle-class part.
Maybe they'll launch a middle-class GPU alongside an SLI-on-a-stick version that would claim the performance crown and create positive publicity to bolster single-GPU card sales.
What's the problem with doing everything that AMD does PLUS doing a single big GPU AFTER you've done what AMD did? I don't see how any kind of power usage may be a problem here.

Well, for that you'd need to keep power in check. It's kinda hard to see NV doing that while making an uber GPU in the first place. Why would they put constraints on themselves? The GTX 285/295 is kind of a freak thing, since it's not often that you can launch a chip and its shrink 6 months later.
Shouldn't GT212 have arrived by now?

End of 2nd Q, start of 3rd Q was a target for GT212 as far as I remember.
Here's a funny rumour: I've heard that NV already "shrunk" GT200 to 40G (512-bit GDDR3 remains, yeah). Sounds fishy, I know.

How big is GT200b if shrunk to 40nm? Does that count as a mid-range GPU then? Surely, by 2009Q4 GTX285 performance is what we'll be calling "mid-range", so that would give us a good idea of a "mid-range" GT3xx.
Don't forget that while you may save some bucks on the smaller GPUs in an mGPU card, you're still burning much more on double the memory size and a more complex PCB/power/cooling solution.

PCB, power and cooling depend on the total power draw. Who says monolithic GPUs can't be power-hungry? Seems more like a general issue than a multi-GPU-specific one.
DegustatoR said: Generally "one big GPU" cards are simply more effective than any mGPU solution that we saw up until today. And that's true not only in the cost of producing the cards, but in performance and features too.

We don't know what the cards cost to make. But suppose the HD 4770 will retail for $100 and the HD 4890 will cost $200. With a $200 budget, what will you buy? The HD 4890, sure, but if CrossFire didn't suck you'd go for a couple of HD 4770s. Anyway, most likely you've got a motherboard with two PCIe x16 slots and CF support even if you don't use it.
DegustatoR said: I'm quite sure that nothing will bring mGPU solutions to the level of single-GPU solutions in terms of flexibility and efficiency, ever.

That's one way to look at it. But the holy grail of multi-GPU tech is to make a GPU that will cover several segments alone. And if you can somehow get it to scale in all cases, even though it won't be perfect, that's exactly what you need. Of course, there's a big if about whether that can be done at all... but hey, strange stuff happens.
An interesting area for mGPU cards lies a bit higher than where AMD is putting its RV670/770 GPUs -- let's say you have a GPU with performance between the middle and high-end class. There may be a window where you can make an mGPU card with two such GPUs which cannot be challenged by one single big GPU, simply because you won't be able to make such a GPU (due to technical limitations).
DegustatoR said: Well, people tend to expect too much. But why wouldn't a GT30x middle-class part with GT200+ performance and DX10.1/11 support at a $250 price point be a performance bomb anyway? After all, that's exactly what's expected from AMD's RV870.

While I think R600 wasn't that bad in itself, it certainly was a big flop. ATI fell to the bottom, and people didn't expect them to launch anything nearly as powerful as the RV770. But nVidia is in a different situation now: they have the most powerful monolithic GPU and arguably the most powerful graphics card (GTX 295). That is their reputation. Still, if they launch something, call it the GT 350, pitch a fair price and say "this is the mainstream, there's still high-end to come", people will buy it. But doing that they either destroy GTX 200 sales, or the new card will get a thumbs-down for being expensive.

Maybe that's why you launch high-end first. You get time to sell old stock and prepare the ground for your new mainstream models. Some say that AMD could do a bigger RV770-based chip, but they don't want the RV870 (with a worse performance/die-size ratio because of DX11) to look weak in comparison. When ATI was launching the RV770, there wasn't a whole lot to cannibalize, as there was nothing faster than the RV670.
End of 2nd Q, start of 3rd Q was a target for GT212 as far as I remember.

One quarter before GT300? That would be like launching the 7900GTX in July '06, ~4 months ahead of the 8800GTX.
Here's a funny rumour: I've heard that NV already "shrunk" GT200 to 40G (512-bit GDDR3 remains, yeah). Sounds fishy, I know.

Hmm, how much smaller can GT200 get and still have a 512-bit bus?
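Napkin math, for a feel of the numbers. Everything below is an assumption on my part -- roughly 470 mm² for GT200b on 55nm, a perfect optical shrink to 40nm, and guessed pad counts and pad pitch -- so it only illustrates why a 512-bit bus puts a floor under the die size, nothing more.

```python
import math

# Napkin math: how far can a 512-bit part shrink before the pad ring,
# which barely scales, becomes the limit? All inputs below are guesses.

gt200b_area = 470.0                # mm^2 on 55 nm (approximate, assumed)
ideal_scale = (40.0 / 55.0) ** 2   # perfect linear shrink 55 nm -> 40 nm

shrunk_area = gt200b_area * ideal_scale
print(f"ideal 40 nm shrink: {shrunk_area:.0f} mm^2")   # ~250 mm^2

# Pad-ring check: a 512-bit GDDR3 interface needs a lot of pads once you
# count data, strobes, address/command and supply -- the count is a guess.
signal_pads = 1200           # assumed total I/O + supply pads for the bus
pad_pitch_mm = 0.06          # assumed effective pitch per pad (60 um)
rows = 2                     # pads often staggered in a couple of rows

perimeter_needed = signal_pads * pad_pitch_mm / rows
die_edge = math.sqrt(shrunk_area)
print(f"perimeter available: {4 * die_edge:.0f} mm, "
      f"needed for the bus alone: {perimeter_needed:.0f} mm")
# With these guesses the shrunk die still fits the bus, but the margin
# shrinks much faster than the logic does -- the pad ring and I/O don't
# scale the way the shaders do.
```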
But maybe they've decided to scrap the all-new GT212 in favour of this "quick'n'dirty GT200@40nm" GPU, PLUS bring some GT30x middle-class part closer to the GT300 launch? I guess we'll know something more solid about what's going on in a couple of months.
Sorry. Didn't get it.

Say you build a "global illumination" algorithm: you might use low refresh rates on the computations, or even stagger the computations over successive frames, operating on different areas/LODs in round-robin fashion.
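A minimal sketch of that kind of staggering, and of why it fights AFR. The four-region split, the stand-in cost function and the numbers are all invented for illustration.

```python
# Toy "amortised GI": only one screen region is recomputed per frame, the
# rest is reused from earlier frames. Region count and values are made up.

NUM_REGIONS = 4
gi_cache = [0.0] * NUM_REGIONS      # persistent lighting data, lives across frames
gi_frame_of = [-1] * NUM_REGIONS    # which frame last refreshed each region

def expensive_gi(region, frame):
    """Stand-in for the costly GI computation for one region."""
    return region * 10.0 + frame

def render_frame(frame):
    region = frame % NUM_REGIONS            # round-robin refresh
    gi_cache[region] = expensive_gi(region, frame)
    gi_frame_of[region] = frame
    # The shaded frame reads *all* regions, most of them from older frames:
    ages = [frame - f for f in gi_frame_of]
    print(f"frame {frame}: refreshed region {region}, data ages = {ages}")

for frame in range(8):
    render_frame(frame)

# Under AFR, odd and even frames run on different GPUs, so gi_cache would
# have to be copied between them almost every frame -- the cross-frame reuse
# that makes this cheap on one GPU is exactly what AFR can't hide.
```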
PCB, power and cooling depend on the total power draw. Who says monolithic GPUs can't be power-hungry? Seems more like a general issue than a multi-GPU-specific one.

The power efficiency of sGPU cards should always be higher than that of mGPU cards. You should either get more performance at the same power, or less power at the same performance. In reality it's not that simple, of course, because of the differences in architectures and processes.
Memory, that's a different story. Yeah, sure, the current way is ineffective, but that's exactly what I'm talking about: we need something better than AFR, something that can take advantage of the extra memory. This is still up for grabs.

Why, that's a single big GPU =)
While I think R600 wasn't that bad in itself, it certainly was a big flop. ATI fell to the bottom, and people didn't expect them to launch anything nearly as powerful as the RV770. But nVidia is in a different situation now: they have the most powerful monolithic GPU and arguably the most powerful graphics card (GTX 295). That is their reputation. Still, if they launch something, call it the GT 350, pitch a fair price and say "this is the mainstream, there's still high-end to come", people will buy it. But doing that they either destroy GTX 200 sales, or the new card will get a thumbs-down for being expensive.

So they'll change everything based on GT200b to this GT350 while lowering prices and adding DX11 support. I don't see how that's bad for them, considering that they'll promise a high-end solution soon anyway.
Maybe that's why you launch high-end first. You get time to sell old stock and prepare the ground for your new mainstream models. Some say that AMD could do a bigger RV770-based chip, but they don't want the RV870 (with a worse performance/die-size ratio because of DX11) to look weak in comparison. When ATI was launching the RV770, there wasn't a whole lot to cannibalize, as there was nothing faster than the RV670.

GT200b isn't exactly a stellar perf/mm GPU, and it's on 55nm, while all GT30x parts will use 40G and have a seriously updated architecture. I'd say that NV have much more room for improvement than AMD had with the RV670->RV770 transition. After all, GT212 was expected to have +30% over GT200b in shader performance, and a GT30x middle-class GPU should be near that performance position.
One quarter before GT300? That would be like launching the 7900GTX in July '06, ~4 months ahead of the 8800GTX.

Maybe that's why we won't have the pleasure of seeing GT212? It was too late to make any kind of sense.
But there's still going to be some GT2xx in 40nm between now and GT300, isn't there?

Low-end OEM parts (GT218, GT216) and some kind of RV740 competitor (GT215). It looks like everything higher than that will be GT30x-based.
The problem of any mGPU config with some kind of shared memory lies in the cost of such a system compared to a single-GPU card. You'll eventually burn everything you gained from the smaller GPUs on all the complex technology which allows them to work together the way a single GPU does.

That's also a possibility. Well, it's cool that we know it's possible to manufacture and sell a 576 mm² GPU, but how much longer can this go on?
So they'll change everything based on GT200b to this GT350 while lowering prices and adding DX11 support. I don't see how that's bad for them, considering that they'll promise a high-end solution soon anyway.

When did AMD do this? People didn't want R600 anyway, so RV670 didn't cause much harm. RV770 was faster than RV670, so they didn't interfere at all (except single RV770 vs. RV670 X2, which was an exotic solution anyway). RV730 offered similar performance and features to the RV670; for most people this was a "potato, po-tah-to" situation.
That's exactly what AMD's done two times already -- I don't understand why you think that what's good for AMD will be bad for NV.
You are saying everything higher than mainstream GPUs will most likely be based on the GT3xx architecture, but the question is WHEN? According to the latest rumours, GT300 is supposed to be released in October at the earliest, or maybe even in January/February 2010. Moreover, NVIDIA almost always releases high-end GPUs first, so mid-range GT3xx parts are going to be seen on the market even later.

We expect the first GT300 parts in October or November. We don't know, however, whether those will be high-end or mainstream parts. My bet stays on the high-end.
Another thing is the latest rumours about the die size of GT300. A bigger die in 40nm than 2xGT200B in 55nm?! What the hell is going on here? It's sick. If it's true (but I hope it's not), then what die size will the mid-range parts have? 500-600 mm²?

Where did you hear that?
Are GDDR5-enabled devices not at similar power loads (or less, in the case of the HD 4890) as their performance-competitive contemporaries?

When comparing against the HD 4850 - no. It's about 40-50 watts higher under load, and I don't want to attribute all of that to the GPU alone, albeit it runs at a higher voltage.
Although it won't necessarily give the full picture, the first place to start is to look at the voltages of the devices. GDDR5 typically runs at lower voltages than GDDR3.

As was the case with GDDR2 (FX5800 Ultra, anyone?) and GDDR4, which failed to impress as well. That's basically what I meant when talking about the superior power characteristics on paper.