ATI RV740 review/preview

I definitely don't dare run FurMark on my 4890 board (w/ aftermarket cooler, 950MHz core, 1.35V GPU) anymore, as last time it pushed the MOSFETs up to 124°C! Crysis never got past 84°C, for the record.
I'll try the old hairy chimp soon enough.
 
What were the GPU temps at that time?

I don't know whether any GPU thermal safeguard bothers to also check the MOSFET temperatures.
 

The MOSFETs aren't monitored and don't have a thermal threshold.
OTOH, I haven't heard anyone mention what their normal operating temperatures are.
 
You definitely won't want to find out first-hand. ;)
I don't know about the MOSFETs, but the magnetic inductors right next to them are rated up to 120°C for continuous operation, or thereabouts.
 
The real world is rife with examples of things that are designed to handle a maximum usage level that isn't sustainable.

That lets you get the maximum benefit when you need it, just not at all times.

When a game comes out that causes this behaviour, then I'll panic. As it is, it's a non-issue...

Regards,
SB

I'm sure the real world is rife with examples of mechanical engineering with well-defined design limits on actual usage. But we're talking about electronics here, where circuits should be designed to operate at their specified frequency (or not at all), at the very least for the guaranteed lifetime of the product.

Singling out FurMark as if it were placing some sort of unreasonable demand on hardware resources is a silly cop-out. This is not an overclocking tool or a hardware diagnostic tool meant to be used only by engineers. It's just an OpenGL application.

You may be happy with hardware that can only run select applications when the moon is right, but let's try to remain objective. This is just not acceptable.
 
The MOSFETs aren't monitored and don't have a thermal threshold.
OTOH, I haven't heard anyone mention what their normal operating temperatures are.

Of course they have thermal thresholds. Depending on die and packaging technology, power MOSFETs are in most cases rated for a maximum operating junction (die) temperature between 125°C and 175°C (the package lower, of course). Real-life temperatures are of course kept even lower than that, because operating temperature is tied to the lifetime of the MOSFET itself.
 
FYI, FurMark is older than both the HD 4000 and HD 3000 series and thus cannot have been maliciously optimized to drive Radeons past their specs.
:yes: I 100% agree with those assertions.

http://www.ozone3d.net/benchmarks/fur/ said:
What is FurMark?

FurMark is a very intensive OpenGL benchmark that uses fur rendering algorithms to measure the performance of the graphics card. Fur rendering is especially adapted to overheat the GPU and that's why FurMark is also a perfect stability and stress test tool (also called GPU burner) for the graphics card.
...
Xtreme Burning

Xtreme Burning is a mode where the workload of the GPU is maximal. In this mode, the donut is fixed and is displayed face-on, which offers the largest surface. In this mode, the GPU quickly becomes very hot
...
Version 1.6.5 - February 4, 2009
Version 1.6.1 - January 30, 2009
Version 1.6.0 - January 7, 2009
Version 1.5.0 - November 23, 2008
Version 1.4.0 - June 23, 2008
...
Version 1.0.0 - August 20, 2007

release notes for 1.5.0 said:
New: added a postprocessing pass in Stability Test mode to make the test more intensive.
release notes for 1.4.0 said:
New: added an extreme burning mode for stability test.

Not exactly maliciously optimised, but it intentionally uses an algorithm that was known from the start to be very GPU-intensive, fills most of the screen with it (possibly with an intentionally excessive number of passes?), and has recent changes adding artificial intensification.
 
This is a bit of a shocker!
I'm back with results.
System: Phenom II 940 @3215MHz 1.35V, GA-MA790FX-DQ6, 2x2GB OCZ Reaper 1066; 2xHD4870 512MB @800/915; 4xHDD + 1SSD powered by Hiper 4M880 (efficiency 80%+ [peak 86% @400-700W]); Screen - Belinea o.Display24

Power measured at wall using Kill-A-Watt for entire system.
C'n'Q disabled, Vista x64 performance profile set to High Performance, Catalyst 9.4

Desktop - 335W
Chimp Demo - peak power - 368W [GPUs stay in their low p-state of 500MHz core, with occasional spikes to 800MHz]
FurMark 1.6 - peak power - 615W [1280x1024 full screen, no AA - score: 10901]

Worth noting that FurMark at 1280x1024 with 4xAA resulted in lower power consumption - 550W peak
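
For anyone who wants to turn those wall readings into rough DC-side numbers, here's a back-of-the-envelope sketch (it assumes the PSU sits at a flat 86% efficiency across the whole load range, which it almost certainly doesn't exactly):

# Rough conversion of wall readings to estimated DC-side draw.
PSU_EFFICIENCY = 0.86          # assumed flat efficiency; the real curve varies with load

readings_watts = {             # wall power measured with the Kill-A-Watt
    "desktop": 335,
    "chimp_demo": 368,
    "furmark_noAA": 615,
    "furmark_4xAA": 550,
}

for name, wall in readings_watts.items():
    dc = wall * PSU_EFFICIENCY                                    # estimated component draw
    delta = (wall - readings_watts["desktop"]) * PSU_EFFICIENCY   # estimated DC delta vs desktop
    print(f"{name:14s} wall={wall}W  est. DC={dc:.0f}W  est. DC delta={delta:.0f}W")

By that crude estimate, FurMark adds roughly 240W of DC-side load over the desktop figure, for the two cards combined.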
 
Does Chimp demo have vsync on?

Allowing for 86% PSU efficiency, I make that 120W extra power draw per GPU by Furmark, compared with desktop. I expect someone will correct my maths though...

Jawed
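
(A quick sketch of that arithmetic - assuming a flat 86% PSU efficiency and an even split between the two cards, both of which are assumptions rather than measurements:)

wall_delta = 615 - 335            # FurMark peak minus desktop, watts at the wall
dc_delta = wall_delta * 0.86      # assumed flat PSU efficiency
per_gpu = dc_delta / 2            # assumed even split across the two HD 4870s
print(per_gpu)                    # ~120 W per GPU over the desktop baseline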
 

Unfortunately yes, ATI Demos have vSync ON.
I've tried Whiteout as well - 445W peak.

Now I'm going to try Riddick: Dark Athena

...

Results for Riddick 1920x1200:
no AA, vSync ON - 427W peak
no AA, vSync OFF - 520W peak
4xAA, vSync ON - 452W peak
4xAA, vSync OFF - 509W peak

All taken at exactly the same place!
What I've noticed is that looking at the complex geometry (the city on the planet) lowers power consumption to 450W in the no-AA, vSync OFF mode! The peak numbers were taken looking straight at the ground with some foliage!
Another observation: with vSync OFF, power consumption rises together with FPS (less complex scene = more FPS = more power), while vSync ON behaves the opposite way (higher scene complexity = higher power, while maintaining 60FPS).
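
A toy model of why that happens, using GPU busy time per second as a crude proxy for power (the function and the frame times below are made up purely for illustration):

REFRESH_HZ = 60

def busy_fraction(frame_time_ms, vsync):
    """Approximate fraction of each second the GPU spends rendering."""
    if vsync:
        # Frame count is capped at the refresh rate, so heavier frames mean
        # more work across a fixed 60 frames per second.
        fps = min(REFRESH_HZ, 1000.0 / frame_time_ms)
    else:
        # Uncapped: frames are rendered back to back, so the GPU is
        # essentially always busy regardless of scene complexity.
        fps = 1000.0 / frame_time_ms
    return min(1.0, fps * frame_time_ms / 1000.0)

print(busy_fraction(8.0, vsync=True))     # light scene, vSync ON  -> ~0.48 busy
print(busy_fraction(15.0, vsync=True))    # heavy scene, vSync ON  -> ~0.90 busy
print(busy_fraction(8.0, vsync=False))    # light scene, vSync OFF -> 1.00 busy, highest FPS

With vSync off the GPU is essentially always busy, so work per second (and power) tracks FPS; with vSync on the frame count is pinned at 60, so work per second tracks how heavy each individual frame is.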
 
Hmm, yeah that's quite an innovative way of investigating power usage and theoretically overall GPU utilisation in games. I guess you could manipulate vsync frame rates artificially, too.

I suppose it's worth noting that vsync'd framerates can drop below monitor refresh when the rendering workload becomes extreme.

Jawed
 
Can you do some power draw figures where the GPUs are not used but the CPU is pegged? Perhaps an Orthos / Prime95 stress test? It would be nice to know how much power a pegged CPU draws.

I know that even with SpeedStep / throttling disabled there's a difference in power draw on my system between CPU idle and CPU load.
 
CPU - full load - 380W (at the desktop, load generated by the AOD stability test)
FurMark loads only 1 core to 100%, which equals ~29% total CPU usage (4% for system and background tasks).
Riddick loads up to 4 cores, but the average CPU load is between 50% and 75%.

@Jawed:
On my configuration Riddick very rarely drops below 60FPS at 1920x1200 with 4xAA and MAX settings, and for comparison's sake I only tested a level where it held 60FPS the whole time :)
I did check what happens when the workload becomes extreme - power consumption decreases again as FPS drops (in that case, with vSync ON, FPS dropped from 60 to 40).
 
Now, if you were a pretender for Dave Baumann's old crown you'd investigate versus a single HD4870 and do a complete sweep across resolutions for a nice fillrate-normalised power-graph :p

FurMark loading the CPU may not even be stressing the CPU in any meaningful way. I haven't investigated this much, but it seems to me that any kind of GPU-limited workload gets the CPU core driving the GPU running at 100% - I can't help thinking that's a "busy waiting" type of workload. Theoretically not exactly stressing the CPU. Dunno really, though. Also might be an ATI thing...

Jawed
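
If anyone does get around to that sweep, the bookkeeping itself is trivial - a sketch with placeholder numbers (the resolutions, wall readings and FPS values below are invented for illustration; only the 335W desktop baseline and the 86% efficiency assumption come from earlier in the thread):

PSU_EFFICIENCY = 0.86      # assumed flat efficiency, as before
IDLE_WALL_W = 335          # desktop baseline from earlier in the thread

sweep = [                  # (resolution, wall watts, average FPS) - placeholder data
    ((1280, 1024), 540, 92),
    ((1680, 1050), 570, 71),
    ((1920, 1200), 600, 55),
]

for (w, h), wall, fps in sweep:
    gpix_per_s = w * h * fps / 1e9                        # Gpixels/s of frame output
    dc_delta = (wall - IDLE_WALL_W) * PSU_EFFICIENCY      # estimated DC load above desktop
    print(f"{w}x{h}: {dc_delta / gpix_per_s:.0f} W per Gpixel/s")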
 

Well, at some point, maybe ;)
The problem is that I have a job which takes 50-70h a week. Besides, I would need to invest in a nice power meter that connects to the computer for logging purposes, to ease my pain. Let's wait for a bank holiday!

Regarding CPU load, I forgot to mention that 1 core loaded by Cinebench = 347W total power consumption.
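
That single-core figure also lets you roughly split the earlier FurMark delta between CPU and GPUs - very rough, since it assumes FurMark's one busy core costs about the same as a Cinebench core and that PSU efficiency is a flat 86%:

idle_wall = 335                  # desktop, watts at the wall
cinebench_1core_wall = 347       # one core loaded by Cinebench, from above
furmark_wall = 615               # FurMark 1.6 peak from earlier in the thread

cpu_core_delta = cinebench_1core_wall - idle_wall            # ~12 W at the wall for one busy core
gpu_wall_delta = (furmark_wall - idle_wall) - cpu_core_delta # what's left for the two GPUs
gpu_dc_delta = gpu_wall_delta * 0.86                         # assumed flat PSU efficiency
print(gpu_dc_delta / 2)                                      # ~115 W extra per HD 4870

Which still lands in the same ~115-120W-per-GPU ballpark as the estimate above.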
 
What are the relative performance drops in each case for turning on MSAA?

Also, I guess it's worth mucking about with forced wide-tent and edge-detect MSAA and super-sampling + MSAA on NVidia.

Jawed
 