NVIDIA Kepler speculation thread

Also, once you use BTC mining to pay off your GPU, you can use it to game and whatnot, or even just resell it on eBay. If you use an FPGA board and pay it off, you get....

That assumes you can pay off your GPU in the first place. If more people use FPGAs/ASICs to mine Bitcoin, you wouldn't even be able to cover your electricity bill for your effort.

I guess I wasn't getting my point across very clearly, so I'll try again. Right now, Bitcoin is not very useful for most people; it's still a very niche thing. So one can argue that Bitcoin mining performance is not really that important, because not many people are actually doing it (though at least some are, of course).

However, IF Bitcoin ever gets really useful, then a lot of people will start mining, and that's sort of the vision of Bitcoin's advocates (they've suggested things like an electric heater that produces its heat by mining Bitcoin, a win-win situation!). However, if that happens, using GPUs to mine Bitcoin will become obsolete, and thus no longer important, because FPGAs or ASICs will certainly replace them. The upfront cost is really not an issue, because if Bitcoin is useful, it will be a big enough market to justify the initial R&D cost.
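To put a rough number on the "pay back your electricity bill" point, here's a back-of-envelope sketch in Python. Every figure in it (hashrate, network hashrate, power draw, electricity price, BTC price) is an illustrative assumption, not a measurement:

# Rough GPU mining profitability sketch; all numbers are assumptions.
my_hashrate      = 400e6   # hashes/s for one GPU (assumed)
network_hashrate = 8e12    # total network hashes/s (assumed)
blocks_per_day   = 144     # one block every ~10 minutes
block_reward     = 50      # BTC per block (2012-era reward)
btc_price        = 5.0     # USD per BTC (assumed)

gpu_power_kw     = 0.25    # power draw under load (assumed)
electricity_rate = 0.10    # USD per kWh (assumed)

revenue_per_day = (my_hashrate / network_hashrate) * blocks_per_day * block_reward * btc_price
cost_per_day    = gpu_power_kw * 24 * electricity_rate
print(f"revenue/day: ${revenue_per_day:.2f}, electricity/day: ${cost_per_day:.2f}")

If FPGAs/ASICs push the network hashrate up while your GPU's hashrate stays fixed, revenue_per_day shrinks toward zero while cost_per_day stays put, which is exactly the "can't even cover the electricity bill" scenario.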
 
Is this meant to be the 690 (as some gaffers say) or just a 670 Ti etc.? I assume the latter; am I wrong?

Funny, they may get the salvage parts out while the 680 itself still isn't available :LOL: That's gotta be a first.

GF104 came out as a salvage part only, so I would say it is one step better for NV this time.
 
However, if that happens, using GPUs to mine Bitcoin will become obsolete, and thus no longer important, because FPGAs or ASICs will certainly replace them. The upfront cost is really not an issue, because if Bitcoin is useful, it will be a big enough market to justify the initial R&D cost.
I'd agree that an FPGA or ASIC could beat a GPU in terms of efficiency.
An FPGA I can see potentially justifying itself, maybe.
Anecdotal accounts are that a few people have been able to justify AMD GPU bitcoin mining, at least in terms of paying for the electricity to run things and making some profit.
External realities such as exchange rate changes, liquidity issues, and the loss of one of the larger exchanges may have changed things since then.

I don't think many ASIC firms are going to accept being paid in bitcoins, though. And in this regard, an FPGA or GPU has an advantage: bitcoin mining gets subsidized onto process nodes several generations beyond what a niche chip manufacturer could afford, because those chips can do something everybody is willing to pay actual money for.
The present total number of bitcoins, at $5 per BTC, amounts to ~$45 million, which, if we believe Globalfoundries, wouldn't even cover the design costs of a 28nm chip.

Since bitcoin mining increases in difficulty as the aggregate hashing output goes up, the actual number of bitcoins an individual can expect to pull in once ASICs are in general use is going to trend back to the previous mean.
That's a massive up-front cost for the opportunity to earn just about as much as you were getting before.
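A tiny sketch of why that happens (numbers are again just illustrative assumptions): difficulty retargets so the network finds ~144 blocks a day no matter how much total hashpower there is, and your take is just your fraction of that hashpower. For reference, the ~$45 million figure above is simply roughly 9 million coins in circulation times $5 each.

# Why an across-the-board ASIC upgrade doesn't raise anyone's take.
# Difficulty retargets so ~144 blocks/day are found regardless of total
# hashrate; your BTC/day is just your share of that hashrate.
blocks_per_day = 144
block_reward   = 50     # BTC per block (2012-era)

def btc_per_day(my_hashrate, network_hashrate):
    return (my_hashrate / network_hashrate) * blocks_per_day * block_reward

print(btc_per_day(400e6, 8e12))              # before: one GPU among many GPUs
print(btc_per_day(400e6 * 100, 8e12 * 100))  # after: everyone, including you, is 100x faster

Same number both times: the only window in which an ASIC can pay off its up-front cost is before everyone else has one too.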
 
It seems to me there are about as many cards mining bitcoin (that can only be guesstimated, though) as there are cards running Folding@home - roughly 30,000 of each.

The new version of WinZip has OpenCL support for GPU acceleration, perhaps that's a more relevant "integer benchmark"?
 
The new version of WinZip has OpenCL support for GPU acceleration, perhaps that's a more relevant "integer benchmark"?

I'm actually very interested in this one, but right now I don't have access to an AMD GPU (I don't have one at home). Generally, LZ compression/decompression is very difficult to parallelize, so it will be interesting to see how much improvement a GPU can deliver.
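Parallelizing LZ is hard because every match depends on the window built up by everything before it. The usual workaround, on CPUs and GPUs alike, is to split the input into independent chunks and compress them in parallel, giving up a little ratio at the chunk boundaries. A minimal CPU-side sketch of that idea in Python (chunk size and names are arbitrary, and this is not necessarily how WinZip's OpenCL path works):

import zlib
from multiprocessing import Pool

CHUNK = 1 << 20  # 1 MiB chunks (arbitrary choice)

def compress_chunk(data):
    return zlib.compress(data, 6)

def parallel_compress(data, workers=4):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with Pool(workers) as pool:
        # Chunks are compressed independently, so matches across chunk
        # boundaries are lost - that's the price paid for easy parallelism.
        return pool.map(compress_chunk, chunks)

if __name__ == "__main__":
    blob = b"some fairly repetitive example payload " * 100000
    parts = parallel_compress(blob)
    print(len(blob), sum(len(p) for p in parts))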
 
I was going to test this, but I ended up deciding never, ever to touch WinZip again, not even with a ten-foot pole.

Before installation it wanted to make WinZip your homepage and search engine and install a browser toolbar, then install a Kaspersky scanner. After the "normal installation" you had to manually go through the file associations and remove them one by one, after which it said you can't use the program properly without the associations; then it wanted to install some zip courier "safer email now" thingy too.
After finally starting the app, it again complained that I had removed its associations, and after the first unzip it asked whether my PC was running optimally and whether I'd like them to scan it.
Oh, and they try to sell you "free WinZip by just completing one of these offers", and at least the evaluation version has adverts and so on.
 
In the worst case, WinZip uses OpenCL just as a framework to split the work over the cores of the CPU :rolleyes:. AFAIK, older versions used only a single core. Someone over at 3DC tested it and saw a reduction in compression time (~60% faster), but also more CPU usage (from 1 thread to ~75% CPU utilization on a Core i7-920 with an HD 5870).
 
Although Charlie claims NV have both

wafer supply issues AND crappy yields

Link

I'm somehow beginning to wonder how bad their situation actually is. One month after the launch and they are still unable to get a decent quantity of chips out. And that's with a relatively small chip.

Thoughts? :???:
 
If true that could be just to cash in on the supply versus demand for GK104.

Since they likely don't want to raise the price of GTX 680 after the fact, they could just offer a dual GPU card with GK104 at twice the price. So 999 USD (or more) for a GTX 690 perhaps? :)

Regards,
SB
 
Charlie stopped being a half-credible source 2 months ago; his records are sealed right now.

Charlie's credibility, in people's eyes, seems tied to how favourably he talks about Nvidia. When he was pumping GK104 for that little spell there, he suddenly became the world's most reliable source on forums, it seemed. Over at HardOCP people would be saying stuff like "Charlie proved right again" about minor things... even though, as I pointed out, he was still getting almost everything wrong (like release dates) :p
 
If true that could be just to cash in on the supply versus demand for GK104.

Since they likely don't want to raise the price of GTX 680 after the fact, they could just offer a dual GPU card with GK104 at twice the price. So 999 USD (or more) for a GTX 690 perhaps? :)

Regards,
SB


Yup, economics says Nvidia not pricing GK104 at 599 or more left a lot of money on the table. Money that probably now goes into the pockets of retailers charging 579, or to eBay.

Not nearly as bad as Nintendo, who left who knows how much on the table for the first 2 years of the Wii :p

AMD must be cleaning up in most segments right now with a top to bottom 28nm lineup that's actually fully available. Well, maybe now that they adjusted 7900 pricing to be reasonable...
 
AMD must be cleaning up in most segments right now with a top to bottom 28nm lineup that's actually fully available. Well, maybe now that they adjusted 7900 pricing to be reasonable...
Well, from a business standpoint, that really depends mostly upon OEM wins. I really don't know how the two IHVs are doing there, but the excellent power consumption of the GTX 680 seems to suggest good things for future laptop parts.

As far as the AIB market is concerned, however, I'd be a bit surprised if AMD's sales hadn't slowed a bit since the GTX 680 benchmarks were put out there, given that the GTX 680 is superior to AMD's offerings in most ways.
 
Charlie's credibility, in people's eyes, seems tied to how favourably he talks about Nvidia. When he was pumping GK104 for that little spell there, he suddenly became the world's most reliable source on forums, it seemed. Over at HardOCP people would be saying stuff like "Charlie proved right again" about minor things... even though, as I pointed out, he was still getting almost everything wrong (like release dates) :p

Well, despite everything Charlie is or isn't, his article on Kepler in January was fairly accurate.

http://semiaccurate.com/2012/01/19/nvidia-kepler-vs-amd-gcn-has-a-clear-winner/

Back then no one believed that GK104 would be a contender against Tahiti; everybody pitted it against Pitcairn. Long after that article, the 256-bit memory bus etc. were brought up. There was supposedly no way it could compete... So he at least got that somewhat right.
 
Big-K is a 7-billion-transistor GPU:
S0642 - Inside Kepler
Stephen Jones ( CUDA Developer, NVIDIA )
Lars Nyland ( Senior Architect, NVIDIA )

In this talk, individuals from the GPU architecture and CUDA software groups will dive into the features of the compute architecture for “Kepler” – NVIDIA’s new 7-billion transistor GPU. From the reorganized processing cores with new instructions and processing capabilities, to an improved memory system with faster atomic processing and low-overhead ECC, we will explore how the Kepler GPU achieves world leading performance and efficiency, and how it enables wholly new types of parallel problems to be solved.
https://registration.gputechconf.com/?form=schedule
 
Yup, economics says Nvidia not pricing GK104 at 599 or more left a lot of money on the table. Money that probably now goes into the pockets of retailers charging 579, or to eBay.

Not nearly as bad as Nintendo, who left who knows how much on the table for the first 2 years of the Wii :p

AMD must be cleaning up in most segments right now with a top to bottom 28nm lineup that's actually fully available. Well, maybe now that they adjusted 7900 pricing to be reasonable...

Completely agree. When Charlie wrote "Kepler will be a win", everyone praised his words... but when he writes in his forum (not an article) that Nvidia has low yields, it's "that can't be true"... I don't say I trust or like Charlie, but the contrast is really funny.

About the 690: well, if yields are bad, this will not change anything. A 700-800+ dollar dual card is good as a halo product, but sales of those cards are so low, and availability can take a month, that it will not make a difference; what is important is that reviewers get the card.

Now I'm a bit surprised Nvidia is releasing it before AMD, so two possibilities: either they are both incredibly close (680 SLI vs. 7970 CFX are neck and neck) and Nvidia thinks it is better to get 1-2 weeks alone with it, or they can't release a 670/660 right now. And I really doubt they are not ready with those, so the problem would be production... (Midrange cards like the 660 would be the more logical move in terms of sales; they will not sell many cards at 550 and 700-800 dollars, and their current lineup consists of just one card - a card that is good for reviews, but that will mostly serve to sell the other cards in the lineup..)

I have always bought high end (since the 9700 Maya Pro era), always SLI and CFX (since the 6600GT SLI edition), but I know most people around me will just buy a 300-dollar card (GTX 660), not a 680 at 500 dollars...

Well, from a business standpoint, that really depends mostly upon OEM wins. I really don't know how the two IHVs are doing there, but the excellent power consumption of the GTX 680 seems to suggest good things for future laptop parts.

As far as the AIB market is concerned, however, I'd be a bit surprised if AMD's sales hadn't slowed a bit since the GTX 680 benchmarks were put out there, given that the GTX 680 is superior to AMD's offerings in most ways.

You will never see the 680 in a laptop; the high-end laptop chips will be based on lower SKUs. And in that range, the laptop chips will be based on the 7870 from AMD and the 660 from Nvidia. I have to say, Nvidia's turbo mode could, on paper, be really interesting on low-end laptop chips, since it guarantees a maximum TDP together with the maximum possible performance, and it will be marketed that way anyway.. but the problem is how much difference it can make at low clock speeds.. a 10% clock increase, when possible, will not gain that much. It looks good on paper, but not in reality, because in a laptop the average TDP in use and the maximum are really very close...
 

That's more like it. If there's a dual-GK104 launching soon then this bad boy will probably stay under wraps for quite a while though.

Also....CUDA 5. I can't remember how many there were in the past but this GTC has a crap-ton of different sessions. Will there even be enough people there to attend them all?

S0641 - CUDA 5 and Beyond
Mark Harris ( Chief Technologist, GPU Computing, NVIDIA )

CUDA, NVIDIA's platform for parallel computing, has grown rapidly in the past 5 years. The performance and efficiency of software built on CUDA, combined with a thriving ecosystem of programming languages, libraries, tools, training, and service providers, have helped make GPU computing a leading HPC technology. CUDA 5 and the Kepler GPU architecture don't just increase application performance; they enable a more powerful parallel programming model that expands the possibilities of GPU computing, and language features that improve programmer productivity. In this talk you'll hear about these revolutionary features and get insight into the philosophy driving the development of new CUDA hardware and software. You will learn about NVIDIA's vision for CUDA and the challenges for the future of parallel software development.
 
AMD must be cleaning up in most segments right now with a top to bottom 28nm lineup that's actually fully available. Well, maybe now that they adjusted 7900 pricing to be reasonable...
AMD reports a $580m loss as graphics sales fall


the fact is that for much of the quarter the firm's hottest product, the Radeon HD 7970 graphics card, was in limited supply. Such were AMD's supply problems that the firm said revenue from graphics was down seven per cent from 2011, not a good sign given that it was top dog until Nvidia's Geforce GTX 680 launched at the tail end of the quarter.

Source: The Inquirer (http://s.tt/19CE5)
 
As if retail sales of $550 graphics cards would mean a whole lot to the overall bottom line anyway... Crap article in general.
And not really related to your original quote, except for the author's own perceptions.
 