AMD RV770 refresh -> RV790

Yes, but it's intentionally designed to stress every part of the GPU at 100%, while the same "products" (i.e., the same graphics etc.) could be achieved with normal workloads too.
Well, in the first place it was/is a benchmark (and one the Radeons win hands down).

I have yet to see marketing guys from either company spring up and say "3DMark Vantage was not a real application" and [insert preferred choice of lame excuses].

It's also the first benchmark I have seen that a company wants to lose on purpose, since that's what's been happening since around Catalyst 8.8 or so.

Funny, purely fictional scenario: tomorrow Viva Pinata 2 goes public, this time with (more) realistic fur instead of tessellation heating up your GPU to the max. What's going to happen then? An artificial performance drop in upcoming drivers?

And that's the whole point: who guarantees that something like this doesn't happen?
 
No kidding:

idle - 150 W
Vantage Extreme - 270 W
FurMark burn - 320 W :oops:

Those are for the entire machine, but 50 W more is impressive.
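For what it's worth, here is a minimal sketch of what those wall readings imply on the DC side. The 80% PSU efficiency figure is purely an assumption for illustration; the post only gives wall (AC) numbers.

```python
# Rough estimate of the DC-side power difference implied by the
# wall-socket numbers quoted above. PSU_EFFICIENCY is an assumed
# value for illustration, not a measured one.
WALL_IDLE_W = 150
WALL_VANTAGE_W = 270
WALL_FURMARK_W = 320
PSU_EFFICIENCY = 0.80  # assumed; plausible for a PSU of that era under load

def dc_delta(wall_a: float, wall_b: float, efficiency: float = PSU_EFFICIENCY) -> float:
    """DC-side power difference implied by two wall (AC) readings."""
    return (wall_b - wall_a) * efficiency

print(f"FurMark over Vantage (wall): {WALL_FURMARK_W - WALL_VANTAGE_W} W")          # 50 W
print(f"FurMark over Vantage (DC, est.): {dc_delta(WALL_VANTAGE_W, WALL_FURMARK_W):.0f} W")  # ~40 W
print(f"FurMark over idle (DC, est.):    {dc_delta(WALL_IDLE_W, WALL_FURMARK_W):.0f} W")     # ~136 W
```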
 
Yes, but it's intentionally designed to stress every part of the GPU at 100%, while the same "products" (i.e., the same graphics etc.) could be achieved with normal workloads too.

Normal workloads do the same thing. All FurMark shows is that Nvidia and ATI did poor power/thermal testing.
 
Sounds like a bit of a security concern as well. Like back when CPUs didn't throttle properly in response to overtaxing workloads, and compromised systems could literally be irreparably physically damaged from across the world.

Not that I really expect the next big worm out there to specifically target overheating the graphics processors in the systems.
 
I assume you have at least some evidence for this?

What, the poor thermal testing, or that normal workloads do the same thing? There are plenty of other programs that push ATI/Nvidia GPUs out of spec; FurMark is only remarkable in that it pushes them so far out of spec.

As far as the power/thermal testing goes, I think that is pretty self-evident.
 
The workloads; I'd be curious to see any other software besides FurMark that actually pushes them over PCI-E specs and/or TDP.
 
Theo can say whatever he wants; those 8+8 cards would never get PCI-SIG approval (due to going past the specifications), and thus could never be marketed as PCI Express products.

Wouldn't both a 6+8 and an 8+8 4890X2 card still be in spec? From what I recall (and please excuse me if my memory is faulty), the 6-pin PCI-E connectors are rated at 75 W max while the 8-pin PCI-E connectors are rated for 150 W max. And for the PCI-E slots, 1.0 and 1.1 can supply 75 W max while the 2.0 version can supply 150 W.

So even if you had a PCI-E 1.0 board with an 8+8 card installed, you're looking at 75 W (slot) + 150 W (connector) + 150 W (connector), or 375 W max. The 6+8 version would be limited to 300 W draw.

The current 4870X2 has only a 6+8 connector configuration. And a single 4890 draws less power under load (approx. 10 W less) than a 4870. So I could easily see a 6+8 4890X2 2GB and an 8+8 overclocked 4890X2 4GB being within PCI-E specs.
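A minimal sketch of the budget arithmetic above, taking the quoted per-connector ratings at face value (a later reply disputes the 150 W figure for the PCI-E 2.0 slot). All names here are illustrative, not taken from any spec text.

```python
# Theoretical board power budget: slot supply plus all auxiliary connectors,
# using the ratings as recalled in the post above.
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def board_budget(slot_w: int, connectors: list[str]) -> int:
    """Max theoretical draw: slot plus the sum of the auxiliary connectors."""
    return slot_w + sum(CONNECTOR_W[c] for c in connectors)

print(board_budget(75, ["8pin", "8pin"]))  # 8+8 on a 75 W slot -> 375
print(board_budget(75, ["6pin", "8pin"]))  # 6+8 on a 75 W slot -> 300
```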
 
Perhaps it's more of a political reason? That ATI doesn't want to have the most power-hungry card on the market, because it would have a negative impact on the lean-and-mean image they have established with their graphics cards?
 
Wouldn't both a 6+8 and an 8+8 4890X2 card still be in spec? From what I recall (and please excuse me if my memory is faulty), the 6-pin PCI-E connectors are rated at 75 W max while the 8-pin PCI-E connectors are rated for 150 W max. And for the PCI-E slots, 1.0 and 1.1 can supply 75 W max while the 2.0 version can supply 150 W.

So even if you had a PCI-E 1.0 board with an 8+8 card installed, you're looking at 75 W (slot) + 150 W (connector) + 150 W (connector), or 375 W max. The 6+8 version would be limited to 300 W draw.

The current 4870X2 has only a 6+8 connector configuration. And a single 4890 draws less power under load (approx. 10 W less) than a 4870. So I could easily see a 6+8 4890X2 2GB and an 8+8 overclocked 4890X2 4GB being within PCI-E specs.

6+8 is in spec; 8+8 isn't.
The rumor of the PCI-E 2.0 slot providing 150 W was false; it's still 75 W like the 1.0 slot, and the max allowed power is 300 W (6+8+slot: 75+150+75).
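If that's right, the 8+8 question reduces to a total-ceiling check rather than per-connector limits. A minimal sketch under that reading (illustrative names, not spec text):

```python
# Corrected figures per this reply: the slot supplies 75 W regardless of
# PCI-E 1.0/2.0, and the spec's total ceiling is 300 W.
SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}
SPEC_CEILING_W = 300

def in_spec(connectors: list[str]) -> bool:
    """True if slot + auxiliary connectors stay within the 300 W ceiling."""
    total = SLOT_W + sum(CONNECTOR_W[c] for c in connectors)
    return total <= SPEC_CEILING_W

print(in_spec(["6pin", "8pin"]))  # 75+75+150 = 300 -> True (right at the limit)
print(in_spec(["8pin", "8pin"]))  # 75+150+150 = 375 -> False (out of spec)
```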
 
The articles I've seen indicate the PCI-E spec has revisions for each of the additional power plug configurations: a 6-pin connector, and 6- and 8-pin connectors together.

Is there a revision for two 8-pins, or are we going to see ATI and Nvidia pushing another revision?
 
Lean-and-mean image? Seems like just about everybody knows the 4870 and up suck power like Dracula even when idling. Regardless of whether or not this is the case, it's what the majority of reviews show.
 
Lean-and-mean image? Seems like just about everybody knows the 4870 and up suck power like Dracula even when idling. Regardless of whether or not this is the case, it's what the majority of reviews show.

Well, people on this forum mainly seem to praise ATI's great performance-per-mm² ratio.
Which I suppose means lean-and-mean.
They make Nvidia's GPUs look overly complex and large, requiring bigger, more power-hungry cards.
 
6+8 is in spec; 8+8 isn't.
The rumor of the PCI-E 2.0 slot providing 150 W was false; it's still 75 W like the 1.0 slot, and the max allowed power is 300 W (6+8+slot: 75+150+75).

So the main stumbling block is that even though the power draw would be within spec for both the slot and the power connectors, it's invalid because the current PCI-E standards don't recognize 8+8 as a valid configuration? Sounds more like a technicality.

As long as the card doesn't exceed the max draw on any of the connectors, I would think it would get PCI-E approval. Unfortunately, as I'm not a PCI-SIG member, I can't download the spec papers to read just how they spell out what is allowed and what isn't.
 
So the main stumbling block is that even though the power draw would be within spec for both the slot and the power connectors, it's invalid because the current PCI-E standards don't recognize 8+8 as a valid configuration? Sounds more like a technicality.

Well, it might be troublesome with PSUs that follow PCI-E specs and therefore offer paired 6+8 connectors; you would run out of 8-pin connectors too soon and be left with 6-pin connectors unused.
 
That's IMO a secondary concern compared to marketing something that doesn't adhere to an open industry standard.

Many PSUs, and not only mGPU-ready ones, already feature more than one 8-pin cable.
 
Please refrain from posting "sky is falling" stuff when it can be avoided with ease. There's a large difference between "dropping Catalyst support" and "dropping FGLRX support in Linux".
 