NVIDIA GeForce GTX 980 Ti

Err... Intel actually spends a fair amount on driver development AFAIK. Maybe not as large a percentage of their gross income, but that isn't really a fair comparison.
iGPUs that are more than two generations out receive no driver updates, hence my using them as an example.
 
Even if that's the case, the R&D budget of a struggling company is lower, no?

30 to 50% of monthly income, which is more or less what Nvidia is spending... Contrary to what people think, AMD is spending a lot of money on research and development.
 
Last edited:
30 to 50% of monthly income, which is more or less what Nvidia is spending... Contrary to what people think, AMD is spending a lot of money on research and development.
Not only is AMD spending less than Nvidia on R&D, they're spending it on more varied product lines with less overlap/synergy.
 
They are also a strawman, hence my not addressing them.
When generations change every year, they're not a strawman. Not with Haswell GT3e levels of performance, especially while it is still actively selling.
However, Intel does still do bugfix releases for older GPUs, though they lack an officially stated policy.
 
When generations change every year, they're not a strawman.
Nope, that is also a strawman. Or rather, it is simply a factual observation that has absolutely nothing to do with the point. Just poor reasoning, I suppose.
 
Nope, that is also a strawman. Or rather, it is simply a factual observation that has absolutely nothing to do with the point. Just poor reasoning, I suppose.
Not nothing. It is spending on drivers that influences how much Intel can support. So it does have something to do with the point.
Though I do agree that it does not prove the point. For example, if Intel was spending more than AMD or Nvidia, but prioritizing catching up in features on new products instead of fully supporting older ones, the same situation would arise.
 
Not only is AMD spending less than Nvidia on R&D, they're spending it on more varied product lines with less overlap/synergy.

The return in terms of revenue is different, but that doesn't mean they don't put the money where it should be... The question of resources that I often see raised on forums is really a false idea... But maybe this isn't the thread to talk about it..
 
Interesting. I guess they did modify the BIOS for that.

Heartbreaking for AMD..their latest cards are ending up in the middle of the pack...

Interestingly... oldie 390X (Hawaii) is working double time against 980!

What is this modified BIOS you speak of? It piques my interest. With it, will other 980 Tis get up to 1.5 GHz on the core?

There is a theory floating around that Nvidia is intentionally capping big Maxwell core clocks... or that Maxwell cores can run higher than what they are being sold at...
 
What is this modified BIOS you speak of? It piques my interest. With it, will other 980 Tis get up to 1.5 GHz on the core?
You can modify the BIOS in order to allow for higher power draw. Without that, the AMP Extreme did not go beyond ~1330 MHz in our tests in a _demanding_ gaming environment (I'm not talking Kindergarten like Crysis 3), let alone 1500 MHz "rock solid". True, you can up the slider no probs, but as soon as the game gets more demanding, the card will throttle.
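To make the throttling mechanism above concrete, here is a toy Python sketch of how a power limit caps the sustainable boost clock. The cubic power curve, the 13 MHz boost-bin step, and all the wattage figures are illustrative assumptions, not measured GM200 behavior:

```python
# Toy model of boost throttling under a power limit (illustrative only;
# real Boost 2.0 behavior is more complex and not publicly documented).
# All numbers below are made up for demonstration.

def throttled_clock(target_mhz, power_limit_w, base_mhz=1000, base_power_w=250):
    """Return the clock the card can sustain under power_limit_w.

    Assumes dynamic power grows roughly with the cube of clock speed
    (frequency times voltage squared, with voltage scaling with clock) --
    a common rule-of-thumb approximation, not a measured GM200 curve.
    """
    clock = target_mhz
    while clock > base_mhz:
        est_power = base_power_w * (clock / base_mhz) ** 3
        if est_power <= power_limit_w:
            break
        clock -= 13  # assume boost steps down in ~13 MHz bins
    return clock

# With a stock-like 250 W limit the card settles well below 1500 MHz;
# a BIOS-modded higher limit lets the same loop sustain more.
print(throttled_clock(1500, 250))
print(throttled_clock(1500, 450))
```

The point of the sketch is simply that the slider setting (the target) is irrelevant once the estimated power exceeds the limit; only raising the limit itself, as the BIOS mod does, moves the sustained clock.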
 
Depends. It more or less denotes the "leakiness" of a single ASIC. While high-leakage parts do tend to need higher voltage and consume more power, they are preferable when used with more exotic cooling methods because they can also hit higher clocks. Low-leakage parts (those with a high "ASIC quality") can run a bit cooler and more efficiently within the normal boundaries of the product's spec. They tend not to achieve world-record clocks, though.
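The leakage tradeoff described above can be sketched as a toy model. Everything here is invented for illustration: GPU-Z's "ASIC quality" is an opaque vendor metric, and the formulas below are rough rules of thumb, not silicon data.

```python
# Toy illustration of the leakage tradeoff: low-leakage ("high ASIC quality")
# parts are more efficient at normal temperatures, while leaky parts gain
# more clock headroom under extreme cooling. Numbers are invented.

def estimated_traits(asic_quality, temp_c):
    """Return (relative_leakage_power, max_clock_mhz) for a hypothetical chip.

    Higher ASIC quality here means lower leakage: less static power, but
    also less extra clock headroom when cooled aggressively (e.g. LN2).
    """
    leakage = (100 - asic_quality) / 100           # e.g. 0.20 at 80% quality
    # Leakage current grows with temperature; running cold tames it.
    leakage_power = leakage * (1 + (temp_c - 25) / 100)
    # Leakier (faster-switching) transistors gain headroom when cold.
    max_clock = 1400 + 600 * leakage * max(0, (60 - temp_c) / 100)
    return leakage_power, max_clock

# At air-cooling temperatures the low-leakage part is more efficient;
# under LN2-class temperatures the leaky part pulls ahead on clocks.
for quality, temp in [(80, 70), (65, 70), (80, -150), (65, -150)]:
    power, clock = estimated_traits(quality, temp)
    print(f"ASIC {quality}% at {temp:>5} C: leakage ~{power:.2f}, "
          f"max clock ~{clock:.0f} MHz")
```

Under the model's assumptions the 65% part leaks more at 70 C but clocks higher at -150 C, matching the intuition in the post above.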
 
By the way, just to brag a little more about the fill-rate power of GM200, I'm finally able to play the good old L4D2 with 8xSGSSAA enabled on a QHD resolution at smooth 60 FPS 95% of the time, in a crowded server. :p
Only the heavier particle effects push it down a bit.
 
ha! I've yet to find a card that gives me v'synced 120 fps in Ultra-HD for Mechwarrior Online. :)
 
EVGA GeForce GTX 980 Ti KINGPIN Unleashing Tomorrow – Pre-Binned, ACX 2.0+ and Pure Copper Based Card Starting at $850 With 72%+ ASIC Quality

Since this card is available in pre-binned options, there are four models to choose from. The $850 US model comes with 72%+ ASIC quality, the $900 US model with 74%+, the $1000 US model with 76%+, and the $1050 US model with 80%+. The first thing to know is that this card isn't designed specifically for gamers; it's designed for overclockers who want the best possible chip to break world records. Since this product is ahead in design compared to many custom GTX 980 Ti models, the price is justified. Gamers who fall in love with this card should just go for the $850 US model and call it a day, since they'll be getting the same overclocking performance as the $1050 US model unless they are die-hard overclockers who use LN2 and water cooling, testing various blocks and tweaking voltages to get the best possible numbers out of their cards.

....
With this new card EVGA is also introducing a brand new way to purchase, by allowing you to select the best card to suit your needs. For the first time ever, EVGA is introducing a way to select your approximate GPU ASIC quality (approximate OC performance) before purchasing. Every single piece of silicon, whether it be a CPU or GPU, varies when it comes to maximum overclocking. On GPUs, ASIC quality is one way to determine potential overclock performance. Please note this ASIC quality* DOES NOT guarantee any specific overclock performance; it is merely a guide. The higher the ASIC quality, the higher the potential overclock performance and the rarer the GPU. Of course, this can and will vary.

* ASIC stands for "application-specific integrated circuit", a general term used to describe a processor designed for a specific task. All GPUs have varying ASIC quality levels, which can be read from applications like GPU-Z. The higher the ASIC quality, the rarer the GPU and the higher the potential for a better overclock. Of course, this can vary and does not guarantee any specific overclock. via EVGA

http://wccftech.com/evga-geforce-gtx-980-ti-kingpin-unleashed-pre-binned-acx-gm200/
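The pre-binned tiers quoted above amount to a simple price-to-bin lookup. A minimal sketch, assuming the figures from the article; the `cheapest_bin` helper is hypothetical:

```python
# The KINGPIN pre-binned tiers from the article, as a simple lookup.
# Prices in USD; the value is the minimum guaranteed ASIC quality bin.
KINGPIN_BINS = {850: 72, 900: 74, 1000: 76, 1050: 80}

def cheapest_bin(min_asic_quality):
    """Return the cheapest tier meeting a desired minimum ASIC quality,
    or None if no tier guarantees that bin."""
    matches = [price for price, quality in KINGPIN_BINS.items()
               if quality >= min_asic_quality]
    return min(matches) if matches else None

print(cheapest_bin(74))   # the $900 tier is the cheapest guaranteeing 74%+
print(cheapest_bin(85))   # no tier guarantees 85%+
```

As the article itself notes, the bin is only a guide: for gaming on air, the $850 tier is effectively equivalent to the $1050 one.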

Performance should be different from other factory 980 Tis, especially if you have an EvBot lying around! Hopefully some reviews will come out.
 
I think it has been touched on around here (by Dave?)... high-leakage ASICs (low scores) will overclock higher...

While KingPin has a lot of practical experience overclocking numerous Nvidia GPUs, I would like to hear a more theoretical insight from our resident engineers: how accurate are those ASIC figures, read from a third-party app (GPU-Z), in determining Maxwell overclocking ability?

It seems there are internal changes to Nvidia's Boost 2.0 algorithms..?

Can we call it crazy to pay Titan X prices for a souped up 980Ti?
 