Nvidia BigK GK110 Kepler Speculation Thread

I thought they were sticking to 375W TDP? Unless they took some liberal definition, I can't see how they could boost to such frequencies and not exceed that. Well unless AFR doesn't work and one chip is idle anyway...
 
If they unlock the +20% TDP, then surely the only limit left is temperature..

But I really doubt retail cards will go this high at stock without going through the driver and unlocking the TDP limit... or maybe only for reviews, lol.

On the screenshot you can see the max temperature limit has been raised to 95°C (that's a big jump; the previous driver, the famous DX one, only raised the max temperature limit from 83°C to 87°C), and the TDP limit can be increased by up to +20%...
 
I don't think we will ever see the Titan Z at this point. Regardless of performance, I don't think they'd be able to get away with selling the Z at more than $1500. So the question is, can Nvidia make a profit at that price point, or would they make more money selling Titan Blacks?
 
At $3k, the profit margin would be absolutely ludicrously humongous. No consumer video card is worth that much money, regardless of its performance, because no reasonable, sane person would ever pay that much even for a full computer system, let alone merely a video card.

So yes, there's certainly room for price reductions, no doubt about that. The Titan Z is not made of precious metals, so Jen-Hsun sure has plenty of room to cut down on those ridiculous margins...
 


Luckily for nVidia there are plenty of insane people on this Earth :D

Some people will buy it purely because it's so expensive...
 
No consumer video card is worth that much money, regardless of its performance, because no reasonable, sane person would ever pay that much even for a full computer system, let alone merely a video card.
It makes more sense if you buy it for GPGPU computation, especially if you need dense DP compute power in a single workstation. Over the years I've bought GTX 295s and GTX 590s just to pack more GPUs per machine. A Titan, with its unlocked DP throughput and huge RAM, makes an attractive compute chip over, say, a GTX 780 Ti. A Titan Z is priced at a premium over two Titans, so you'd have to ask if the increased density is worth it.

Unfortunately, the fact that the Titan Z is a three slot wide card partially spoils my "dense GPU packing" argument.
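
For a rough sense of what the DP throughput argument means in numbers, here's a minimal CUDA sketch that estimates theoretical FP64 peak from the device properties. The 64-FP64-units-per-SMX figure is an assumption that only applies to GK110-class parts with the full 1/3-rate DP mode enabled (Titan / Titan Black / Titan Z), so treat the output as a back-of-the-envelope estimate, not a benchmark:

```
// Back-of-the-envelope FP64 peak estimate for GK110-class cards.
// Assumption for illustration: 64 FP64 units per SMX, each doing one FMA (2 flops) per clock.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, dev);
        const int fp64UnitsPerSM = 64;      // GK110-style SMX assumption
        double ghz = p.clockRate / 1e6;     // clockRate is reported in kHz
        double dpTflops = p.multiProcessorCount * fp64UnitsPerSM * 2 * ghz / 1e3;
        printf("%s: %d SMX @ %.0f MHz -> ~%.2f TFLOPS FP64 (theoretical peak)\n",
               p.name, p.multiProcessorCount, ghz * 1e3, dpTflops);
    }
    return 0;
}
```

For a 15-SMX part at roughly 889 MHz that works out to about 1.7 TFLOPS FP64 per GPU, which is where the density-per-box argument comes from.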
 
With the price difference between a bunch of Titan Zs and twice as many Titan Blacks, you could afford to build additional chassis to house the Titan Blacks in, plus have change left in the bank.

Also, don't forget the Titan Z is a 3-slot card, so it only saves you one slot versus two regular dual-slot Titans. You probably need to keep the fourth slot adjacent to your Z clear anyway, so as not to obstruct the fan and make the GPUs run hot, causing downclocking and a further performance reduction versus two regular Titan boards. Meaning you pay more for less performance at the same GPU density.

...So, I really don't see any advantage at all actually. Not at three thousand fucking dollars anyway.
 
It makes more sense if you buy it for GPGPU computation, especially if you need dense DP compute power in a single workstation. Over the years I've bought GTX 295s and GTX 590s just to pack more GPUs per machine. A Titan, with its unlocked DP throughput and huge RAM, makes an attractive compute chip over, say, a GTX 780 Ti. A Titan Z is priced at a premium over two Titans, so you'd have to ask if the increased density is worth it.

Unfortunately, the fact that the Titan Z is a three slot wide card partially spoils my "dense GPU packing" argument.

Bear in mind that for $3,999 you can get 16GB on a single GPU and 2.6 TFLOPS of double-precision performance, in a two-slot, 300W form factor.
 
It could be some incentive to look into OpenCL. The advantage of not being tied to a single vendor would increase the flexibility with future hardware choices. And as a bonus, nV may improve their OpenCL support.

And frankly, I rather doubt that OpenCL is really not an alternative for most people. There probably are some areas where CUDA offers features that are impossible to use with OpenCL right now, but I would think that's a minority of all use cases. It's mostly inertia.
 
I don't know enough about CUDA and OpenCL to distinguish them in terms of features. But I do know that rewriting existing code is a non-starter for most companies. GPU compute, AFAIK, is still an almost exclusively professional affair.
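
To make the porting point concrete, here's a hypothetical example of the sort of existing CUDA code people would be asked to rewrite: a small double-precision AXPY kernel plus its host-side setup. The kernel body itself would translate to OpenCL almost line for line; it's the host side (cudaMalloc/cudaMemcpy and the <<<...>>> launch) that would have to be redone against clCreateBuffer, clBuildProgram and clEnqueueNDRangeKernel, which is where most of the rewriting effort, and the reluctance, comes from:

```
// Minimal double-precision AXPY in CUDA: y = a*x + y
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

__global__ void daxpy(int n, double a, const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<double> hx(n, 1.0), hy(n, 2.0);
    double *dx = nullptr, *dy = nullptr;
    cudaMalloc(&dx, n * sizeof(double));
    cudaMalloc(&dy, n * sizeof(double));
    cudaMemcpy(dx, hx.data(), n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    // Host-side launch syntax is the CUDA-specific part an OpenCL port has to replace.
    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);

    cudaMemcpy(hy.data(), dy, n * sizeof(double), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);   // expect 5.0
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```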
 
AMD had something with the 7970 already, but couldn't make a name for its FirePro variants. Not enough support, libraries, languages, debuggers, or maybe they just weren't well known enough. But then the GTX Titan pulled the rug out from under it.

Now we're still waiting for HSA, or an Opteron Kaveri (or maybe rather an Opteron variant of Carrizo).

In the consumer space AMD seems to be gaining a foothold, with Adobe mostly using OpenCL.
3D rendering engines are another place to get a foothold; that 16GB card might be very useful there (though DP is not greatly needed in that application). And then maybe industrial/medical/scientific apps that visualize tons of data, that kind of stuff.

HPC market? I'm not aware of anything happening, but that's simply my superficial perception.
In HPC, the customers write the code [edit: there are also the big-name linear algebra libraries and such to reuse]. They don't care so much about the unit price (some department budget pays for it); density and power are important anyway; but foremost they're worried about writing that code cleanly and easily, debugging it, and having it scale.

And there I'm just ignorant of what's been going on lately (what is there already: OpenCL 1.2, C++ AMP, HSAIL, something-I-forgot-the-name-of), while Nvidia simply says "Look here! CUDA 5.0!", "Now with CUDA 6!"
("By the way, we have FORTRAN")
 
I don't know enough about CUDA and OpenCL to distinguish them in terms of features. But I do know that rewriting existing code is a non-starter for most companies. GPU compute, AFAIK, is still an almost exclusively professional affair.

This assumes that companies purchasing (or looking to purchase) GPGPU cards already have CUDA code. I don't think that will be the case for most people.
 