NVIDIA Kepler speculation thread

No need to guess; Kaotik posted some GPGPU benchmarks over in the 7970 thread which include results from a bitcoin miner:

http://muropaketti.com/artikkelit/naytonohjaimet/gpgpu-suorituskyky-amd-vs-nvidia,2 (2nd benchmark)

It isn't at "absolute stink" level, but it's still disappointing. If GK110 adds the missing instructions at full speed, it should beat a 7970.

For bitcoin mining it does stink, very badly. The 6670 I have in the basement rig could do 110+ Mhashes/s at less than 30W at stock clocks.
 
I'd be pretty surprised if GK110 didn't have the same ISA as GK104.
But as GK104 does 32-bit shifts only at the double precision rate, the step from 1/24 to 1/2 could help quite a bit, even if there is more to it than just shifts.
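As a quick sanity check on how much that step could buy (Amdahl-style; the shift/rotate time fraction f below is a made-up illustration, not a measurement):

\[
S = \frac{1}{(1-f) + f\cdot\frac{r_\mathrm{old}}{r_\mathrm{new}}},
\qquad r_\mathrm{old}=\tfrac{1}{24},\; r_\mathrm{new}=\tfrac{1}{2}
\]

So if, say, half of GK104's hashing time went into shifts/rotates (f = 0.5), the bound would be S ≈ 1/(0.5 + 0.5/12) ≈ 1.85x.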
 
How is performance on a set of instructions a GPU can perform completely unrelated to GPU architectures and chips?
 
AlphaWolf said:
How is performance on a set of instructions a GPU can perform completely unrelated to GPU architectures and chips?

Programmable processors can execute arbitrary sets of instructions. For instance, you could compile a web server to run on a GPU. But that would be unrelated to the workloads a GPU is useful for, and so I would say web server performance is unrelated to GPU architecture.

Bitcoin mining is similarly unrealistic as a workload for assessing the strengths and weaknesses of a GPU architecture. It's a strange application that doesn't predict performance well for other applications that are more useful on GPUs. And since FPGAs are so much more efficient at Bitcoin mining, I see no reason to care about GPU performance. Conversely, the people that care about Bitcoin mining don't care about GPUs either.

Which is why I think Bitcoin mining is irrelevant to GPU architecture.
 
And since FPGAs are so much more efficient at Bitcoin mining, I see no reason to care about GPU performance. Conversely, the people that care about Bitcoin mining don't care about GPUs either.

Which is why I think Bitcoin mining is irrelevant to GPU architecture.
Have you actually read the Bitcoin forums? GPUs are a far larger market than FPGAs simply because they are easier to set up and they are useful for other tasks. Yes, some FPGA solutions may be more efficient in perf/W, but in most cases fast GPUs will pay off their investment faster as they are much faster in absolute performance.

Also, perf/w isn't important to everyone.

-FUDie
 
Programmable processors can execute arbitrary sets of instructions. For instance, you could compile a web server to run on a GPU. But that would be unrelated to the workloads a GPU is useful for, and so I would say web server performance is unrelated to GPU architecture.

Bitcoin mining is similarly unrealistic as a workload for assessing the strengths and weaknesses of a GPU architecture. It's a strange application that doesn't predict performance well for other applications that are more useful on GPUs. And since FPGAs are so much more efficient at Bitcoin mining, I see no reason to care about GPU performance. Conversely, the people that care about Bitcoin mining don't care about GPUs either.

Which is why I think Bitcoin mining is irrelevant to GPU architecture.

That might have been true 10 years ago. GPUs are now being leveraged to do other things, and it might not always be that useful, but it's hardly unrelated to graphics architecture.
 
The nature of the bitcoin mining algorithm is such that it is not very useful for evaluating an architecture.
Really? It helps to measure integer performance, does it not? What about hashCat? Does that app not count either just because AMD's GPUs are so much better than nvidia's?

Other tests may stress different parts of the chip, such as caches or float/double performance, but that doesn't make Bitcoin or hashCat any less useful in providing information about the chip being tested.

-FUDie
 
There is some talk about NVIDIA's lack of BFI_INT and integer rotate instructions, which makes them significantly slower than AMD's.
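For anyone wondering what those instructions actually buy you: the SHA-256 round function leans heavily on a bit-select and on 32-bit rotates. A minimal C sketch (not taken from any particular miner, just the textbook definitions) is below; Ch() is exactly the bit-select that BFI_INT computes in one instruction, Maj() can be rewritten into the same bit-select form, and rotr32() is a single instruction with a native rotate but two shifts plus an OR without one.

Code:
#include <stdint.h>
#include <stdio.h>

/* Rotate right by n (1..31). A single instruction where the hardware has a
   native rotate / bit-align op; otherwise two shifts plus an OR. */
static uint32_t rotr32(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32u - n));
}

/* SHA-256 "choose": each result bit comes from y where x is 1, from z where
   x is 0. This is a pure bit-select, i.e. one BFI_INT on AMD, but roughly
   three logic ops on hardware without such an instruction. */
static uint32_t ch(uint32_t x, uint32_t y, uint32_t z)
{
    return (x & y) ^ (~x & z);
}

/* SHA-256 "majority": textbook form below; it can also be rewritten as a
   bit-select of z and y under the mask (x ^ y), so it collapses to a single
   BFI_INT as well. */
static uint32_t maj(uint32_t x, uint32_t y, uint32_t z)
{
    return (x & y) ^ (x & z) ^ (y & z);
}

int main(void)
{
    uint32_t a = 0x6a09e667u, b = 0xbb67ae85u, c = 0x3c6ef372u;
    printf("rotr32: %08x\nch:     %08x\nmaj:    %08x\n",
           rotr32(a, 7), ch(a, b, c), maj(a, b, c));
    return 0;
}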

Offtopic, and I'm not a regular on this forum, but the results posted on BSN for crypto look really strange.

BSN results posted:

http://img10.imageshack.us/img10/8083/sisoftsandra2012cryptog.jpg

My results are similar to other reviews' (mine are a bit higher, probably due to the rest of the system).

23.563 and 21.531 GB/s. I don't know what bug BSN hit to get only 1.8 GB/s for encrypt/decrypt, when other reviews show 10x more.
 
Offtopic, and I'm not a regular on this forum, but the results posted on BSN for crypto look really strange.

BSN results posted:

http://img10.imageshack.us/img10/8083/sisoftsandra2012cryptog.jpg

My results are similar to other reviews' (mine are a bit higher, probably due to the rest of the system). 23.563 and 21.531 GB/s.
I don't know what bug BSN hit to get only 1.8 GB/s for encrypt/decrypt, when other reviews show 10x more.
It obviously varies a bit with driver version and the API used (DX11 CS vs. OpenCL), but I guess one can safely ignore BSN's results. In the link Kaotik posted one also finds this:

[two benchmark screenshots from the linked article]
 
Really? It helps to measure integer performance, does it not? What about hashCat? Does that app not count either just because AMD's GPUs are so much better than nvidia's?

That depends on what you define as "integer performance." For starters, it's completely parallel with almost no inter-thread activity. Of course, that's not unheard of, but it's certainly rare (another example is password cracking).
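To illustrate the "almost no inter-thread activity" point, here is the rough shape of the mining loop in plain C. The double SHA-256 is replaced by a dummy placeholder just to keep the sketch short and compilable; the point is that every nonce is an independent trial, so on a GPU each thread simply takes its own slice of the nonce space and never has to talk to any other thread.

Code:
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Placeholder for double SHA-256 of the 80-byte block header. A real miner
   plugs an actual SHA-256 implementation in here; this dummy only lets the
   sketch compile and run. */
static void sha256d(const uint8_t header[80], uint8_t out[32])
{
    memset(out, 0xff, 32);
    out[0] = header[76] ^ header[79];   /* pretend the nonce matters */
}

/* Scan a range of nonces. Every iteration is completely independent: no
   shared state, no synchronization. On a GPU, each thread runs this body
   for its own nonce (or its own small nonce range). */
static int scan_nonces(uint8_t header[80], uint32_t first, uint32_t count,
                       const uint8_t target[32], uint32_t *found)
{
    for (uint32_t i = 0; i < count; i++) {
        uint32_t nonce = first + i;
        memcpy(&header[76], &nonce, 4);      /* nonce sits in bytes 76..79 */
        uint8_t hash[32];
        sha256d(header, hash);
        if (memcmp(hash, target, 32) <= 0) { /* toy "hash below target" test */
            *found = nonce;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    uint8_t header[80] = {0};
    uint8_t target[32] = {0};
    target[0] = 0x01;                        /* toy difficulty */
    uint32_t nonce;
    if (scan_nonces(header, 0, 100000, target, &nonce))
        printf("hit at nonce %u\n", nonce);
    else
        printf("no hit in range\n");
    return 0;
}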

Other tests may stress different parts of the chip, such as caches or float/double performance, but that doesn't make Bitcoin or hashCat any less useful in providing information about the chip being tested.

The problem here is, IMHO, that if Bitcoin ever gets really useful, people will use FPGA/ASIC based mining devices instead of GPUs. Performance per watt counts, especially here. That's because the cost of mining Bitcoin, other than the initial cost of buying the equipment, is basically electricity. The market dictates that the price of Bitcoin will eventually be very close to the cost of mining Bitcoin (and, when 21 million Bitcoin are all mined, the transaction fee). Therefore, any higher-cost method is going to become obsolete, and it won't really be that important to know which GPU is faster at mining Bitcoin.

I'll give a similar example: back when 3D cards were still very expensive, we saw a lot of CPU benchmarks of software 3D engines (such as the original Unreal engine). They were important because a lot of people were running 3D games with a software renderer. However, these days no one cares about that anymore, because no one runs their 3D games with a software renderer. It could still be an interesting benchmark, but it's not very relevant to other real-world applications.
 
that doesn't make Bitcoin or hashCat any less useful in providing information about the chip being tested.

Well, it's actually pretty useless. Bitcoin performance only tells you how fast the chip executes essentially one instruction, one that isn't prevalent in the workloads the chip is designed for. The real question is whether we should care about Bitcoin performance at all. For people with jobs I suspect the answer is no.
 
If nothing else it'll probably help AMD sell some cards, so the answer is "yes"

I mean, who decides what a viable workload is? If people use it for that, it's a relevant workload.
 
That depends on what you define as "integer performance." For starters, it's completely parallel with almost no inter-thread activity. Of course, that's not unheard of, but it's certainly rare (another example is password cracking).
Communication will slow down the computation, no doubt. But that does not change the fact that GK104 obviously has quite low raw integer performance (only integer adds are reasonably fast). In typical gaming and a lot of GPGPU workloads this may not matter much, but cryptography, like some other mathematical problems, is a valid application field for GPGPU.
The problem here is, IMHO, that if Bitcoin ever gets really useful, people will use FPGA/ASIC based mining devices instead of GPUs. Performance per watt counts, especially here. That's because the cost of mining Bitcoin, other than the initial cost of buying the equipment, is basically electricity. The market dictates that the price of Bitcoin will eventually be very close to the cost of mining Bitcoin (and, when 21 million Bitcoin are all mined, the transaction fee). Therefore, any higher-cost method is going to become obsolete, and it won't really be that important to know which GPU is faster at mining Bitcoin.
That is a misconception in my opinion. Newer hardware is going to provide higher performance/Watt. What is cost effective today won't be cost effective tomorrow, as the effort needed for mining new bitcoins rises. Therefore, at some point you need to replace the hardware with something else to keep the mining profitable. That also means there is a limited lifespan over which a piece of hardware can be used. For that simple reason, the initial cost of buying the hardware of course plays a role. If a certain piece of hardware costs less to buy and achieves higher absolute performance, you can tolerate worse performance/W and still reach a higher profit.
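To put some (completely made-up) numbers on that mechanic, write the profit over the hardware's useful lifespan T as

\[
\mathrm{profit}(T) = v\,H\,T \;-\; c_\mathrm{el}\,P\,T \;-\; C_0
\]

where H is the hash rate, P the power draw, v the revenue per unit hash rate (which falls over time as difficulty rises), c_el the electricity price and C_0 the purchase price. With hypothetical numbers, a $300 GPU at 400 MHash/s and 200 W versus a $600 FPGA board at 200 MHash/s and 20 W, v = $0.005 per MHash/s per day and electricity at $0.10/kWh, the GPU ends one year around +$255 while the FPGA ends around -$253, despite the FPGA's 5x better performance/W. You can of course pick numbers where it flips, but purchase price and absolute throughput clearly enter the calculation, not just perf/W.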
 
I mean who decides what a viable workload is, if people use it for that, it's a relevant workload.

I can use my toaster to bake chicken too. It might be a relevant workload for me but it doesn't tell me how good of a toaster it is.

The market dictates that the price of Bitcoin will eventually be very close to the cost of mining Bitcoin (and, when 21 million Bitcoin are all mined, the transaction fee)

The market dictates that the long-term price of bitcoins will depend on the supply of bitcoin (fixed) versus demand. Demand is driven by its usefulness as a currency. That's what makes the whole thing a joke. The speculators should ride the hype train while it's rolling but there's a cliff waiting at the end of the tracks.
 