Nvidia BigK GK110 Kepler Speculation Thread

Ailuros from 3DCenter forums posted this PDF from NVIDIA: "NVIDIA Tesla® K20-K20X GPU Accelerators - Benchmarks. Application Performance Technical Brief."

At least some of it I've seen elsewhere.

I've been a member here at B3D longer than almost anywhere else ;) The link is on NV's homepage, in the first paragraph here: http://www.nvidia.com/object/tesla-servers.html

The most interesting part in that marketing pdf above is how badly performance scales from 1 to 2 GPUs.
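To put a number on "how badly", the usual yardstick is parallel efficiency: the speedup from the second GPU divided by the GPU count. A quick worked definition (the figures below are made up purely for illustration, not taken from the brief):

$$S = \frac{T_{1\,\mathrm{GPU}}}{T_{2\,\mathrm{GPU}}}, \qquad E = \frac{S}{2}$$

If a run hypothetically takes 100 s on one K20X and 70 s on two, then S ≈ 1.43 and E ≈ 71%, i.e. each card is delivering only about 71% of the throughput a single card would on its own.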
 
Actually, Xeon Phi can boot Linux and it can run x86 software on its own (remember that Xeon Phi consists of a few dozen relatively simple x86 CPU cores placed together on one piece of silicon). I believe that Intel has pulled the wool over people's eyes by running Linpack on the Beacon supercomputing system without turning on the Xeon CPUs, primarily to get to the top of the Green 500 list. The reality is that Xeon Phi is supposed to be used as a "co-processor", and no one in their right mind would build a supercomputing system around it without some high-performance CPUs. Fortunately for NVIDIA, Project Denver will integrate CPU and GPU cores so that their card will be able to boot Linux too. The same presumably goes for AMD with their next-gen card.

Just because it's x86 doesn't mean it can actually run an OS. You have to deal with I/O too. Can it access the hard drive? Read input from the keyboard? Sure, it can run the kernel-level code, but it still needs the CPU to manage the peripherals (although these days the CPU doesn't directly manage them either...).
 
This just shows that many algorithms work poorly on systems without shared memory, since they have to synchronize between iterations.
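For anyone who wants to see where that per-iteration synchronization cost actually sits, here's a minimal CUDA sketch, assuming two GPUs splitting a 1D Jacobi-style relaxation and trading a halo cell over PCIe every iteration. The kernel, sizes, and names are invented for illustration; this is not how any of the benchmarked codes are necessarily written.

```
#include <cuda_runtime.h>
#include <cstdio>

// Toy Jacobi-style sweep: each interior element becomes the average of its neighbours.
__global__ void relax(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = 0.5f * (in[i - 1] + in[i + 1]);
}

int main() {
    const int half  = 1 << 20;     // interior elements per GPU
    const int n     = half + 2;    // plus one halo cell on each side
    const int iters = 100;
    float *a[2], *b[2];

    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&a[dev], n * sizeof(float));
        cudaMalloc(&b[dev], n * sizeof(float));
        cudaMemset(a[dev], 0, n * sizeof(float));
        cudaMemset(b[dev], 0, n * sizeof(float));
    }

    for (int it = 0; it < iters; ++it) {
        // Compute phase: both GPUs sweep their own half independently.
        for (int dev = 0; dev < 2; ++dev) {
            cudaSetDevice(dev);
            relax<<<(n + 255) / 256, 256>>>(a[dev], b[dev], n);
        }

        // Synchronization phase: the part a single shared-memory GPU never pays.
        // Both devices must drain their queues, then halo cells cross PCIe
        // before the next sweep can start.
        for (int dev = 0; dev < 2; ++dev) { cudaSetDevice(dev); cudaDeviceSynchronize(); }
        cudaMemcpyPeer(b[1],            1, b[0] + half, 0, sizeof(float)); // GPU0 right edge -> GPU1 left halo
        cudaMemcpyPeer(b[0] + half + 1, 0, b[1] + 1,    1, sizeof(float)); // GPU1 left edge  -> GPU0 right halo

        // Swap input/output buffers on each device for the next iteration.
        for (int dev = 0; dev < 2; ++dev) { float* t = a[dev]; a[dev] = b[dev]; b[dev] = t; }
    }

    printf("done after %d coupled iterations\n", iters);
    return 0;
}
```

Even though the halo here is only a few bytes, both devices have to rendezvous every single iteration; with realistic boundary sizes the PCIe transfers and the stalls quickly eat into whatever the second GPU adds, which would line up with the mediocre 1-to-2 scaling in those slides.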

Maybe it's high time graphics IHVs consider alternative mGPU solutions, especially for the professional markets.
 
What do you mean? The single K20 is around 5-8x faster than the dual *socket* Xeon...

There's nothing in my former posts that refers to the CPU. Again, performance scaling from 1 to 2 GPUs in those systems is, in the majority of the displayed cases, mediocre at best. In fact, if it were even possible, it would be interesting to see how a Tesla K10 behaves compared to a GTX 680 (or lower) to see if anything changes for dual-chip SKUs in those benchmarks.
 
There's nothing in my former posts that refers to the CPU. Again, performance scaling from 1 to 2 GPUs in those systems is, in the majority of the displayed cases, mediocre at best. In fact, if it were even possible, it would be interesting to see how a Tesla K10 behaves compared to a GTX 680 (or lower) to see if anything changes for dual-chip SKUs in those benchmarks.

Eh, I misinterpreted mGPU to mean mobile-class GPU, which brought to mind all those pokey integrated GPUs in place of a decent discrete GPU. So I assumed you meant something along the lines of replacing GPU power with CPU...
 
I've been a member here at B3D longer than almost anywhere else ;)

Logged in to confirm. This is true.

So has anyone figured out what Nvidia's big idea is? GK110 seems to me like the biggest mystery in GPU history: it's a "GPU" without an actual GPU product! This is a first for me.

Was GK110 really late? Or is it early?

If GK114 comes out and they never ship a GK110 GPU, is it because they fouled up GK110, or because GK104/GK114 is just really good?

I also don't understand the lack of tabloid speculation. Where are Charlie D.'s horror stories?!
 
If Nvidia sells GK110 as fast as they can make them for the professional and academic markets, they don't need to make a GeForce with it. If you want a "real GPU", one actually used to display things, they can make a Quadro with it, and even a Quadro salvage part.

That would be unusual, but to make an analogy, Intel created the Nehalem-EX line and its successors: the same tech as even the low-end consumer stuff, but totally unaffordable for consumers.
 
Logged in to confirm. This is true.

So has anyone figured out what Nvidia's big idea is? GK110 seems to me like the biggest mystery in GPU history: it's a "GPU" without an actual GPU product! This is a first for me.

Was GK110 really late? Or is it early?

IMHO NV this time deliberately chose not to tape out the highest-end chip first with the smaller chips following (as it did up to GF110), but instead gave priority with Kepler to the mainstream and performance chips ahead of the high-end chip. From what I can tell, GK110 should have taped out in early March 2012, and NV has been producing and selling those chips since Q3 '12.

If GK114 comes out and they never ship a GK110 GPU, is it because they fouled up GK110, or because GK104/GK114 is just really good?

What's a "GK114"? :p I can smell a high end desktop SKU announcement being imminent albeit it'll probably stand behind the shadow of the gigantic Tegra4/Wayne SoC :devilish:

I also don't understand the lack of tabloid speculation. Where are Charlie D.'s horror stories?!

Charlie stated that there will be a desktop GK110, despite the typical rubbish you can read all over the internet.
 
If Nvidia sells GK110 as fast as they can make them for the professional and academic markets, they don't need to make a GeForce with it.

The problem remains that those tiny volumes, despite the extremely high margins, won't be enough to cover the R&D expenses for that kind of project.

If you want a "real GPU", one actually used to display things, they can make a Quadro with it, and even a Quadro salvage part.

I wonder why GK104 coincidentally has 4 GPCs, or why it made it into servers that only need single precision. Taking a 550mm² chip and disabling half of it for Quadros isn't exactly the best solution either; only if you have binning yields as crappy as with GF100.
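A rough back-of-the-envelope on why a ~550mm² die leans so heavily on salvage bins, using the simple Poisson yield model with an assumed, purely illustrative defect density of D₀ = 0.4 defects/cm²:

$$Y = e^{-A \cdot D_0}$$

At A ≈ 5.5 cm² that gives Y ≈ e^{-2.2} ≈ 11% fully working dies, versus roughly e^{-1.2} ≈ 30% for a ~3 cm² GK104-class die. Real 28nm defect densities and the more forgiving negative-binomial models will shift those numbers, but the exponential penalty on area is why bins with disabled SMXes/GPCs are pretty much mandatory for the big chip.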

In any case, we'll see in a couple of days whether they've changed their mind again like last year at CES.
 
That doesn't mean there won't be a GeForce GK110 though. There could be GeForce variants of GK110 and GK114.

As for not mentioning GK110 in that article, it could be because he didn't think it was necessary or relevant for that particular report.
 
They can just wait it out until AMD plays its cards. Then release a GK110 with just enough HW enabled to surpass it in all benchmarks. Then even later release an Ultra refresh that's even faster.
 
They can just wait it out until AMD plays its cards. Then release a GK110 with just enough HW enabled to surpass it in all benchmarks. Then even later release an Ultra refresh that's even faster.

That is assuming GK110 has the potential to do that.
 
Lately Nvidia has used a final letter "X" on some products.
Maybe the 780 is GK114 and GK110 goes on a 780X.
And you'll pay dearly for that; everyone likes X's.
 
The problem remains that those tiny volumes, despite the extremely high margins, won't be enough to cover the R&D expenses for that kind of project.



I wonder why GK104 coincidentally has 4 GPCs, or why it made it into servers that only need single precision. Taking a 550mm² chip and disabling half of it for Quadros isn't exactly the best solution either; only if you have binning yields as crappy as with GF100.

In any case, we'll see in a couple of days whether they've changed their mind again like last year at CES.

Nvidia has never made (or at least not in a long, long time) GeForce product announcements at CES. I expect 90% of their CES event to be Tegra 3 success stories and the unveiling of Tegra 4.
 
Nvidia has never made (or at least not in a long, long time) GeForce product announcements at CES. I expect 90% of their CES event to be Tegra 3 success stories and the unveiling of Tegra 4.

While most of the above might be true, they actually intended to announce Kepler at CES last year but backed off nearly at the last minute, citing a business decision. Unveiling T4 for sure made my day; I haven't yawned that much in a long time.
 
While most of the above might be true, they actually intended to announce Kepler at CES last year but backed off nearly at the last minute, citing a business decision. Unveiling T4 for sure made my day; I haven't yawned that much in a long time.

Yeah, that whole presentation was pretty boring and JHH was not his usual charismatic self. I was also disappointed by the lack of info on Tegra 4's power consumption and GPU performance relative to the competition.
 