AMD RV770 refresh -> RV790

I find the newer ATI GPUs extremely power hungry. I wanted to jump on an HD 4870 but was held back by the crazy idle draw! Is there a particular 4870 brand that has good power management, or can someone share the best method to underclock and undervolt a stock 4870 for low idle draw? I do not want an Nvidia card as I am building a Dragon platform. I could go for a 4850, at almost half the price and with good idle draw, but I don't think this model will last me 2 years.
 
I find the newer ATI GPUs extremely power hungry. I wanted to jump on an HD 4870 but was held back by the crazy idle draw! Is there a particular 4870 brand that has good power management, or can someone share the best method to underclock and undervolt a stock 4870 for low idle draw? I do not want an Nvidia card as I am building a Dragon platform. I could go for a 4850, at almost half the price and with good idle draw, but I don't think this model will last me 2 years.

Well, the (reference) 4890 has lower idle draw than the 4870 ;)
Anyway, according to an interview, the high idle draw of the 4870 is caused by problems with GDDR5 if they downclock the core too much; apparently they solved this with the 4890, which goes down to 240 MHz on the core.
 
A third-party application like ATI Tray Tools can manage the GPU voltage on every reference 4870 board (not sure about the 4850), letting you create custom overclock/underclock profiles for both the GPU and the memory. At 450 MHz @ 1.083 V on the GPU in 2D mode, the board is stone cold. ;)
There should be 4890 support soon enough, let's hope.
The other option is modifying the video BIOS (if you're not stuck on Windows).
 
The Polish site PCLab.pl did a power consumption test with the memory underclocked to 450 MHz in CCC.

[Chart: PCLab.pl idle power consumption results]


They also managed 1030 MHz on the core of an ASUS board with some voltage tweaking ;)
 
With my own HD 4870 from PowerColor, I've measured idle power dropping by roughly 15 to 40 watts (for the card alone) just by downclocking the VRAM to 450 MHz.
 
What I got from Anand is that PhysX is useless/doomed and Havok is the future. PhysX is only supported by one vendor (and thus won't be widely supported in games), while Havok will be supported by all three.

Well, you said it yourself: PhysX "is" and Havok "will be". There is no GPU Havok yet, so it's really a moot point. There's nothing stopping PhysX from running via OpenCL if the competition heats up.
 
Well, you said it yourself: PhysX "is" and Havok "will be". There is no GPU Havok yet, so it's really a moot point. There's nothing stopping PhysX from running via OpenCL if the competition heats up.

If PhysX, as I hope, sooner or later runs on OpenCL, it will stop being an asset for Nvidia cards, and that could well mean the defeat of the whole CUDA concept.
GPU computing, in every form, needs standard middleware to really become an alternative to, or at least a complement to, CPU computing, imho.
 
CUDA was always an interim solution. nVidia is just going to have to compete on performance the same way they have to with OpenGL and D3D applications.
 
No, CUDA will definitely be around for a while. CUDA and OpenCL are not mutually exclusive.

CUDA is an architecture and an API. Even if OpenCL became dominant, Nvidia could easily offer it as a front-end in place of C for CUDA, just as ATI will use OpenCL on top of CAL to replace Brook+.
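
To put that front-end point in concrete terms, here is a rough sketch (a made-up vector-add example, error checking omitted, not anything Nvidia or ATI have published) of what it looks like through the OpenCL 1.0 C API, with the equivalent C for CUDA kernel shown in the comment up top. The kernel bodies are close to line-for-line identical, which is why layering one front-end over another vendor's compute stack is mostly a tooling exercise rather than a rewrite.

Code:
/* vecadd_cl.c -- purely illustrative OpenCL 1.0 vector add (error checks omitted).
 * The equivalent kernel in C for CUDA would be:
 *
 *   __global__ void vec_add(const float *a, const float *b, float *c, int n)
 *   {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       if (i < n) c[i] = a[i] + b[i];
 *   }
 *
 * launched with  vec_add<<<N / 64, 64>>>(da, db, dc, N);
 */
#include <stdio.h>
#include <CL/cl.h>

static const char *kSrc =
    "__kernel void vec_add(__global const float *a, __global const float *b,\n"
    "                      __global float *c, int n)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Grab whatever platform/device is installed -- Nvidia, ATI or a CPU
     * implementation; the host code doesn't care which vendor it is. */
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* OpenCL compiles the kernel from source at run time; with C for CUDA it
     * would have been compiled offline by nvcc instead. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vec_add", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    int n = N;
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);
    clSetKernelArg(k, 3, sizeof n, &n);

    size_t global = N, local = 64;                /* cf. <<<N / 64, 64>>> */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, &local, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[42] = %.1f (expected %.1f)\n", c[42], a[42] + b[42]);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}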
 
Or it will be like OpenGL on top of Glide, as a comparison for 3dfx nutjobs: you had many OpenGL implementations, sitting either on glide2x (miniGL, early wickedGL) or glide3x (later wickedGL, 3dfx ICD, MesaFX).
 
No, CUDA will definitely be around for a while. CUDA and OpenCL are not mutually exclusive.

Well, it's a bit like Cg vs. HLSL / GLSL. Cg is still around, but not something you'd use unless you have a specific reason to do so.
 
Or it will be like OpenGL on top of Glide, as a comparison for 3dfx nutjobs: you had many OpenGL implementations, sitting either on glide2x (miniGL, early wickedGL) or glide3x (later wickedGL, 3dfx ICD, MesaFX).

A very emasculated version of OGL on top of Glide. Nvidia would hope not to follow that path if they want to succeed.

On top of that, if it follows in Glide's footsteps, as soon as OpenCL or the DX equivalent becomes as powerful or more powerful, then why in the world would any programmer use it?

Also, isn't CUDA a higher-level language than CAL? In other words, CAL is a low-level language and thus isn't very similar to OpenCL, which is a high-level language. Meanwhile, CUDA and OpenCL are basically similar in that they make things easier for the programmer, who doesn't have to worry about the low-level details of interacting directly with the hardware. So if CUDA is (at least at the moment) tied to one vendor's hardware, while OpenCL is not tied to one vendor's hardware, then...

Why would a programmer bother to program in both CUDA and OpenCL when you could presumably do everything in OpenCL and thus avoid maintaining two different code paths?

That's why Glide died. It was basically the same as OpenGL (although more limited) and later Direct3D. Except OpenGL and Direct3D weren't tied to any one vendor's hardware. And neither were they controlled by any one hardware vendor.

I just have a hard time seeing CUDA survive in the consumer space if it remains limited to one vendor.

I can, however, see it surviving quite well in the HPC market, where you have high-priced applications developed for a very limited number of customers. Customers who presumably have the hardware... and then have the software developed for it.
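
Just to make the "one code path" point concrete, here's a purely illustrative sketch (error handling omitted): the enumeration at the top of the OpenCL host API is identical no matter whose implementation is installed, so the same binary picks up an Nvidia GPU, an ATI GPU or a CPU implementation without any vendor-specific branches.

Code:
/* list_cl.c -- purely illustrative sketch: enumerate every installed OpenCL
 * platform and its devices through one vendor-neutral code path. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, plats, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        char pname[256] = { 0 };
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
        printf("Platform %u: %s\n", p, pname);

        cl_device_id devs[8];
        cl_uint ndev = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev);

        for (cl_uint d = 0; d < ndev; ++d) {
            char dname[256] = { 0 };
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
            printf("  Device %u: %s\n", d, dname);
        }
    }
    return 0;
}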

Regards,
SB
 
Also, isn't CUDA a higher-level language than CAL?

C for CUDA is. But Nvidia is making a distinction between CUDA the architecture and "C for CUDA", which is what I guess you're referring to. Based on their "new" framing, CUDA is the technology they will use to support DX Compute, OpenCL, C for CUDA, etc.
 
Why would a programmer bother to program in both CUDA and OpenCL when you could presumably do everything in OpenCL and thus avoid maintaining two different code paths?
CUDA, being dedicated to one vendor's hardware, can advance more rapidly, gaining features significantly ahead of OpenCL.

Though it can be argued that the features being added to OpenCL and CUDA over the next few years are, effectively, all "well understood" right now. It's just version 1.0-itis that means they're not all in there. So there's no particular reason for OpenCL to be laggardly, especially as x86 is a target.

And then there's the Larrabee factor. Why program OpenCL when...

Jawed
 
C for CUDA is. But Nvidia is making a distinction between CUDA the architecture and "C for CUDA", which is what I guess you're referring to. Based on their "new" framing, CUDA is the technology they will use to support DX Compute, OpenCL, C for CUDA, etc.

Ah, OK. Thanks for clearing that up for me. Makes more sense now. :)

Regards,
SB
 
No, CUDA will definitely be around for a while. CUDA and OpenCL are not mutually exclusive.

No, they certainly aren't mutually exclusive; OpenCL just makes CUDA obsolete.

CUDA is an architecture and an API. Even if OpenCL became dominant, Nvidia could easily offer it as a front-end in place of C for CUDA, just as ATI will use OpenCL on top of CAL to replace Brook+.

CUDA is not an architecture. It is merely an API and programming model. The exact same thing that OpenCL is: an API and a programming model.

And I doubt that either Nvidia or ATI wants to make it so that their OpenCL stack goes through yet another stack before it finally gets translated into hardware instructions.
 
C for CUDA is. But Nvidia is making a distinction between CUDA the architecture and "C for CUDA", which is what I guess you're referring to. Based on their "new" framing, CUDA is the technology they will use to support DX Compute, OpenCL, C for CUDA, etc.

Sounds like a bunch of marketing to me.
 
Sounds like a bunch of marketing to me.

Do you have a better name for the underlying tech that drives all these APIs, or do you honestly believe there's going to be a custom software stack for each of them? Marketing isn't evil when it makes sense. Sheesh.
 