> Right, but if I write Furmark2.0.exe does it still get the workaround?

Sure, with the subsequent driver update.

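To see why renaming the executable defeats this kind of workaround, here's a minimal sketch of purely name-based detection. Every identifier below is invented for illustration; this is not actual driver code.

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Hypothetical sketch of name-based application detection, the scheme the
// early Catalyst workaround reportedly used.
bool needs_throttle_workaround(std::string exe_name) {
    // Normalize case so "FurMark.exe" and "furmark.exe" compare equal.
    std::transform(exe_name.begin(), exe_name.end(), exe_name.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    // An exact-match blacklist: a renamed copy such as "Furmark2.0.exe"
    // slips through until a driver update extends the list.
    return exe_name == "furmark.exe";
}
```
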
> That analogy would apply to overclocking... maybe. It doesn't apply at all to software just using the machine.

You know, it's not such a bad analogy... Overclocking would be akin to exceeding the redline and, in effect, speed-margining the engine. These days you'd have to circumvent the rev limiter, of course. But as for software just using the machine, well, anyone can just use 1st gear or WOT all the time, but that wouldn't fit the normal distribution of usage patterns. I love analogies.

> But as for software just using the machine, well, anyone can just use 1st gear or WOT all the time, but that wouldn't fit the normal distribution of usage patterns. I love analogies.

Well, fine, but then let's have AMD release a guide with their SDK that states that you can't do X/Y/Z on their cards... oh, and probably forfeit a lot of good faith with Microsoft, since I'm pretty sure Furmark is valid DirectX code.

> Heh, does your owner's manual explicitly state not to drive exclusively in 1st gear or with your foot constantly planted on the accelerator? Documentation isn't always enough.

Uhh, in most places you're taught and tested on that before you're even allowed to drive. This analogy is getting ridiculous though... it's not helping us understand anything about the real subject at hand.

> Look at it like "there's nothing for free in 3D", esp. wrt perf/mm in their 4k series. They addressed their oversight via hardware monitoring for such corner cases in the 5k series.

If there is indeed hardware monitoring then fine; it satisfies the constraint that I shouldn't be able to write valid software that breaks it. My understanding was that this isn't the case, though, and that instead they are relying on non-100%-robust application detection to catch these "corner cases".

> Uhh, in most places you're taught and tested on that before you're even allowed to drive.

Well, isn't that dandy - everyone must be a great driver.... I can willfully drive like a ninny or be a dyno freak, putting my engine at greater risk of failure. Just like an undergrad can pass a degree but still be a sloppy or malicious coder, or write "corner case" stress-testing code...

> This analogy is getting ridiculous though...

That's the point. I don't know of any games whose workload distribution is anything like these utils. Does it matter? Maybe. But it's been addressed.

> My understanding was that this isn't the case, though, and that instead they are relying on non-100%-robust application detection to catch these "corner cases".

http://www.anandtech.com/video/showdoc.aspx?i=3643&p=11

Anandtech R8x0 review said:
> ...until Catalyst 9.8 they detected the program by name, and since 9.8 they detect the ratio of texture to ALU instructions (Ed: We're told NVIDIA throttles similarly, but we don't have a good control for testing this statement). This keeps RV770 safe, but it wasn't good enough. It's a hardware problem, the solution needs to be in hardware...
>
> For Cypress, AMD has implemented a hardware solution to the VRM problem...

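To make the two approaches in that quote concrete - the driver-side instruction-ratio heuristic versus hardware monitoring - here's a rough sketch. Every name, threshold, and unit below is invented for illustration; none of this is AMD's actual driver or firmware code.

```cpp
// Purely illustrative; invented names and thresholds throughout.
struct ShaderStats {
    unsigned alu_instructions;
    unsigned texture_instructions;
};

// Driver-side heuristic (the Catalyst 9.8+ scheme described above): flag a
// shader whose ALU-to-texture instruction ratio looks Furmark-like.
bool looks_furmark_like(const ShaderStats& s) {
    if (s.texture_instructions == 0)
        return s.alu_instructions > 0;  // all-ALU shader, no texturing at all
    double alu_per_tex =
        static_cast<double>(s.alu_instructions) / s.texture_instructions;
    return alu_per_tex > 50.0;          // made-up threshold
}

// Hardware-style solution (the Cypress approach described above): ignore
// what the application *is* and watch what the board actually draws,
// stepping the clocks down whenever the VRMs exceed a safe limit.
void vrm_monitor_tick(double vrm_current_amps, unsigned& core_clock_mhz) {
    const double kVrmLimitAmps = 80.0;  // made-up limit
    if (vrm_current_amps > kVrmLimitAmps && core_clock_mhz > 100)
        core_clock_mhz -= 10;           // back off until within spec
}
```
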
> Well, isn't that dandy - everyone must be a great driver.... I can willfully drive like a ninny or be a dyno freak, putting my engine at greater risk of failure. Just like an undergrad can pass a degree but still be a sloppy or malicious coder, or write "corner case" stress-testing code...

The analogy doesn't apply at all... all you're proving is how inappropriate it is. Let's give it up already!

Ah, cool, thanks for the link. Looks like Anand agrees completely with what I've been saying, but it appears that they have indeed solved this properly in the 5k series - sweet!

Now if they would just stop calling these applications "power viruses" or other such nonsense and accept that they are legitimate - albeit rare - workloads that the card has to handle.

> Now if they would just stop calling these applications "power viruses" or other such nonsense and accept that they are legitimate - albeit rare - workloads that the card has to handle.

> The analogy doesn't apply at all... all you're proving is how inappropriate it is. Let's give it up already!

Actually the analogy is quite perfect.
Furmark's only purpose is to allegedly test the performance boundaries of a graphics card. That is no more or less valid than driving a car around at redline to test the performance boundaries of a car.
In the car case you're guaranteed to ruin your car if you do this extensively, similar to how (if there are no software/hardware checks in place) you could theoretically do the same using Furmark.
The assumption is that people are smart enough not to run at redline constantly, and smart enough not to run Furmark constantly.
As reality has proven, however, there are in fact people not smart enough in both cases.
And just like there are (0) real applications that I can think of that have a car running at redline constantly, there are (0) real applications that I can think of that run Furmark code constantly.
In fact, code like that is far more common for CPUs than GPUs, as it's far easier to saturate a CPU. CPUs also have a history of this, and thus safeguards were implemented over time.
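For instance, a toy loop like this pins every core with pure ALU work (illustrative only; not any particular real "power virus", and not something to leave running on marginal cooling):

```cpp
#include <algorithm>
#include <thread>
#include <vector>

int main() {
    // One busy worker per hardware thread (fall back to 1 if unknown).
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        workers.emplace_back([] {
            volatile double x = 1.000001;
            for (;;) x = x * 1.000001 + 0.000001;  // pure ALU work, no I/O
        });
    }
    for (auto& t : workers) t.join();              // spins forever
}
```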
GPUs, so far, have a history of exactly one such incident. And somehow people find it odd that safeguards weren't put into place before there was anything to show a need for such a safeguard?
Once the discovery was made, it was only a matter of time before safeguards were put into place...
I'm going to assume that in this case NVIDIA ran into the power virus problem during design and validation of one of their GPUs in the past, or that there was some communication between the writers of Furmark and NVIDIA through devrel or some such, while ATI hadn't.
Either way, it's an extreme corner case that has thus far not had even the slightest, minuscule mirror in actual applications.
Regards,
SB

> And just like there are (0) real applications that I can think of that have a car running at redline constantly, there are (0) real applications that I can think of that run Furmark code constantly.

Doesn't matter... define "real application"? Furmark is a "real application"; what if I like watching the pretty pictures that it generates? Why can't I use it as my screen saver? What about the "real" volume renderer in another thread here that was overheating cards when run for too long? Oh, I'm sure that's not "typical" either? I don't give a damn what AMD thinks is typical; I want to run whatever I want on the card and not be concerned about it breaking. I have no such concerns on any NVIDIA cards or any CPUs, and thus if AMD is unwilling to make similar guarantees then their cards are patently behind the times in this respect.

> Once the discovery was made, it was only a matter of time before safeguards were put into place...

That's fine - good, even! After reading even a little about the tech in the 5k series, I'm completely happy about the solution. So let me reiterate that the only problem here is AMD downplaying it as anything less than hardware not working properly. It is *not* the application's fault; it was the hardware, and they fixed it. Great, let's move on now.

> Either way, it's an extreme corner case that has thus far not had even the slightest, minuscule mirror in actual applications.

Unclear - a proven stability problem like this casts doubt on the source of other stability problems in the past. But the point, again, is that they have fixed what was a problem with previous hardware, no matter how "rare" it was or what AMD thinks of the programs that reveal it.

> Doesn't matter... define "real application"? Furmark is a "real application"; what if I like watching the pretty pictures that it generates? Why can't I use it as my screen saver? What about the "real" volume renderer in another thread here that was overheating cards when run for too long? Oh, I'm sure that's not "typical" either? I don't give a damn what AMD thinks is typical; I want to run whatever I want on the card and not be concerned about it breaking. I have no such concerns on any NVIDIA cards or any CPUs, and thus if AMD is unwilling to make similar guarantees then their cards are patently behind the times in this respect.

Anyway, I would be careful about saying that "other cards" have no such problems, as this other thread...

> Anyway, I would be careful about saying that "other cards" have no such problems, as this other thread...

The problem here has never been the existence of a hardware bug. The problem is/was AMD downplaying and even denying that it was anything other than a problem with the hardware. I doubt NVIDIA would respond to that bug (which appears to be more of a chipset thing, BTW) with "oh, well... try not to run new 3D applications with those drivers installed... the cards weren't really made for that. Just play Quake 3 - we tested that a bit".