NVIDIA Kepler speculation thread

This will happen. Either Intel or Nvidia's Project Denver will take the CPU socket. AMD's days in HPC are coming to an end.

Knights Corner / Xeon Phi could potentially put an early end to Nvidia's HPC ambitions. It will be fun to see them battle it out next year, though. But yes, for Nvidia to have a chance they probably need to own the CPU socket too.

It does not and didn't intend to. It just answered the question of why HyperTransport is a reason for Cray to use Opterons. ;)

Ok.
 
I would also imagine that for really big supercomputers that provide great PR, AMD offers attractive pricing that perhaps Intel does not.
 
I think the big machines quite readily get discounts from either chip manufacturer. If Intel decides not to offer as big a discount, it might be because getting equivalent perf/$ with a superior design doesn't require one. Perf/W is already so slanted in Intel's favor that the slightly higher up-front cost of an Intel chip would be eaten up by AMD's deteriorating power scaling.
 
Exactly. Nvidia shouldn't have brought Fermi over to the 600 series any more than AMD should have VLIW parts in their HD 7000 line-up.

Exactly. They both follow this strategy because on the desktop market there isn't enough demand for 2-3 generation old parts, which they can more easily offload into more consumer-oriented notebooks.

But two things are definitely weird. One is bringing the fairly power-hungry GF114 part into laptops, presumably for desktop-replacement (DTR) machines, if that market still exists. Most of the older rebranded AMD products are still budget mainstream parts that can be paired with cheap Sandy Bridge based CPUs.

And the weirder thing is that when OEMs like freakin' Dell and HP deploy a Fusion A10 (Piledriver core), which already has a more than decent integrated GPU, they also tend to pair it with a 7750M/7850M GPU (Cape Verde GCN1 Pro/XT) and claim weaselly CrossFire support, when most of their customers could get a much more competitively priced laptop ($100-150 less than older Sandy Bridge machines) if they just stuck with the damn second-generation Fusion APU.

Btw, I never thought that MX parts could be more powerful than the originally branded parts :D (referring to the GeForce2-based MX420/440/MX4000 competing against GF4/GF5-class cards a decade ago)
 
So now we know where all the GK110s went :oops:

You really think those are GK110 chips inside? That's just too optimistic, to say the least.


They might, of course, release it for the desktop, but they make huge margins on GK104, and without any performance pressure from the other party they will simply milk this situation as long as they can. Even indirectly (or directly, who knows), both parties agree to keep things as they are.

Another opportunist :LOL: You really think envy has working GK110 silicon already and yet is suppressing its own bragging rights? :rofl:

It was never mentioned what kind of FP performance they offer in that Cray XK7 ;) at the link envydia was so kind to provide us :D http://nvidianews.nvidia.com/Releases/NVIDIA-Powers-Titan-World-s-Fastest-Supercomputer-For-Open-Scientific-Research-8a0.aspx


http://www.hpcwire.com/hpcwire/2012-10-29/titan_sets_high-water_mark_for_gpu_supercomputing.html

(...) What I find interesting is that the XK6 nodes obviously get upgraded to XK7 ones in the process. Originally it was planned that the Tesla cards would just be drop-in extensions for the new XK6 nodes (it should have worked, as Jaguar had just been upgraded from XT5 to XK6). What is strange is that HPCwire claims a max power consumption of 12.7 MW (up from the prior 10.8 MW afaik), while Cray's specs say only 54.1 kW per rack, the same as with XK6. No idea what to make of that.
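A quick back-of-the-envelope check of those numbers (assuming 200 XK cabinets for Titan, a count not stated in the post) shows why the 12.7 MW figure looks inconsistent with the per-rack spec:

```python
# Sanity-check the Titan power figures from the post above.
# Assumption (not from the thread): Titan has 200 XK cabinets.
cabinets = 200
kw_per_rack = 54.1                      # Cray XK7 spec cited in the post

total_mw = cabinets * kw_per_rack / 1000
print(f"{total_mw:.2f} MW")             # ~10.82 MW, matching the prior 10.8 MW

# The 12.7 MW HPCwire claims would instead require this much per rack:
per_rack_needed = 12.7e3 / cabinets
print(f"{per_rack_needed:.1f} kW")      # ~63.5 kW, well above the 54.1 kW spec
```

So under that cabinet-count assumption, the rack spec reproduces the old 10.8 MW number, not the new 12.7 MW one.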

You could say the whole real-life story is pure wild estimation. Just like we're doing right here, only without claiming any expert proof.

If it's really a fully functional GK110 (well, with one SMX disabled, going by your numbers), why haven't we heard any specs yet?
 
Yes, the 128bit bus is just odd... I guess they felt it would be too close to the 660 otherwise, but if that was the case, I think they would have been better off just using lower clocks on the 650 Ti. As for competing with the 7850, that isn't really its job. Nvidia's 4th tier card is the 660. The pricing reflects Nvidia's belief they can charge a premium for their name. For the high end, that may be true, but I doubt they will be so lucky in the value segment.

They already binned these parts by working SMX units. Going for another binning by speed would have required additional cost. I guess they put the GTX 650 Ti out there just to mess with AMD's sales, not to give us the best performance the chip could offer with a 192-bit bus. And btw, both the GTX 660 and GTX 650 Ti are way overpriced judging by the DAMN SHORT PCBs they use... they could be far cheaper parts than even the HD 7770.
But envy isn't worried about some dumb customer who walks into a store and buys a card based on the manufacturer's shiny box rather than real performance. Those cards will soon (which is GREAT) be replaced with a better-performing card, hopefully again from the same manufacturer (envy), because people are just suckers for branding.

Unfortunately this is how PC market works :rofl:


You're joking right?

You're joking right? (Not that you're funny to me, but at least you find yourself to be amusing)
 
I'm sorry I don't understand your post, or even why you are quoting me.

And yes, that is GK110 (AKA K20) in Titan.

http://www.cray.com/Products/XK/Specifications.aspx

http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-newsArticle&ID=1750839

Or are you suggesting that Cray is lying?

I quoted you about the GTX 650 Ti being only 128-bit, and its marketing position is more than evident if you try and read.

And as for Cray, they never stated they use GK110, but it can be concluded from the relation "Peak performance: 100+ Tflops per system cabinet -- NVIDIA® Tesla® K20 GPU Accelerators, up to 96 per cabinet", because GK104 itself has FP32 performance of 3 Tflops, so the numbers would have to be much higher. Anyway, they still haven't bragged about GK110 at all, and that would have been envydia's job, not Cray's.
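The arithmetic behind that inference can be sketched as follows (the per-chip TFLOPS figures are rough spec-sheet values, not from the thread):

```python
# Per-GPU throughput implied by Cray's cabinet spec quoted above.
gpus_per_cabinet = 96
cabinet_tflops = 100.0                  # "100+ Tflops per system cabinet"

per_gpu = cabinet_tflops / gpus_per_cabinet
print(f"{per_gpu:.2f} TFLOPS per GPU")  # ~1.04 TFLOPS

# Rough spec values: GK104 (GTX 680-class) does ~3.1 TFLOPS FP32 but only
# ~0.13 TFLOPS FP64 (1/24 rate), while GK110 (K20) does ~1.17 TFLOPS FP64.
# ~1.04 TFLOPS per GPU only makes sense as an FP64 figure on GK110.
```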
 
They explicitly state the GPU is K20 (GK110) in both the specs and press release.

But perhaps you should inform the Department of Energy they have been had.


keritto said:
I quote you about GTX650Ti being only 128bit and its marketing position its more than evident if you try and read.
I'm afraid I still do not understand, and quite frankly, I don't think I ever will. Additionally, given your irrational response on the Titan matter, I am inclined to disregard whatever else you might say. Sorry.
 

Excellent article.

2688CC ...

I'm a bit worried about the clock speed needed to make them all run (1536 CC @ 1-1.1 GHz / 195 W vs 2688 CC @ ?).

I believe the Tesla card (not the one in Titan, but retail) should not go over 225W, so certainly a core speed of around 800 MHz max.
After the good reception from consumers and press regarding GK104's TDP, I really doubt Nvidia wants to release a 250W+ gamer card.

(But at the same time, AMD has the same problem with their new series, which will stay on the 28nm process: if they haven't dramatically improved efficiency here and there, they will need more SPs and thus more TDP.)
 
Take away 3 GB of GDDR5, add the turbo that should keep GK110 within its TDP, and I believe 800-850 MHz is possible. A 250W TDP is okay as long as performance increases accordingly.

GK104 consumes about 165W (card only) in games. If GK110 adds 50% performance over GK104, then 250W is not a problem.
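That estimate can be sketched with a naive scaling model (an assumption, not anyone's measurement: power taken as proportional to core count times clock, ignoring voltage, memory, and leakage differences):

```python
# Naive power scaling from GK104's in-game figure to a hypothetical GK110 card.
gk104_cores, gk104_clock, gk104_power = 1536, 1006, 165.0  # cores, MHz, W (card only, in games)
gk110_cores = 2688

def est_power(cores, clock_mhz):
    """Scale GK104's measured power linearly with cores and clock."""
    return gk104_power * (cores / gk104_cores) * (clock_mhz / gk104_clock)

for clock in (750, 800, 850):
    print(clock, "MHz ->", round(est_power(gk110_cores, clock)), "W")
# 800 MHz -> ~230 W, 850 MHz -> ~244 W, both under a 250 W budget.
```

Under this crude model the 800-850 MHz range does land just under 250W, though the linear assumption is generous to GK110.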
 