NVIDIA Maxwell Speculation Thread

Hey, it was not an attack; I really thought he was part of Intel, since we have some people here who are part of AMD or of gaming companies. I just made a mistake with his name. Why do people want to take it as an attack? I have no problem with him, and that was not my intention. What I meant is that today the monitor brands and scaler makers, plus AMD, are looking into how to fix this.
It's @Andrew Lauritzen who works for Intel
 
I understand English isn't your first language, but you should try to be more careful about phrasing. Saying "I know you work for [X]" in that context sounds like an accusation, and wouldn't be a very nice thing to say even if it were true. Also, simply because someone works somewhere doesn't mean they know everything about the company; the vast majority of engineers (as opposed to managers/marketers) are focused on the interesting problems they personally work on, and cannot be expected to know everything else that is going on.
 
For old time's sake, from July 2009:
Yeah, too bad ole Moore's been stumbling these past couple of years with regard to the usual 36-month cadence, something you probably couldn't have easily foreseen in 2009, or at least not without being a high-level silicon engineer...
 
Moore's Law is always on the verge of ending or slowing down if you listen to some people; it just so turns out they were (partially!) right for once... Still, either AMD or NVIDIA could likely deliver 10 to 15 FP32 TFlops in early 2016 if they wanted to for a HPC-only part on 14nm, so we're not as far off as it might seem. The scaling always looks worse when you're at the end of a generation.
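
To sanity-check a figure like that: peak FP32 throughput is just the number of FP32 lanes x 2 FLOPs per FMA per clock x clock rate. A rough sketch in Python, where the lane counts and clocks are purely illustrative guesses rather than any real spec:

# Peak FP32 throughput: lanes * 2 FLOPs per FMA per clock * clock (GHz) -> TFlops.
def peak_fp32_tflops(fp32_lanes, clock_ghz):
    return fp32_lanes * 2 * clock_ghz / 1000.0

# Hypothetical 14nm HPC-only part: 6144 lanes at ~1.0 GHz -> ~12.3 TFlops
print(peak_fp32_tflops(6144, 1.0))
# A 28nm GM200-class part for comparison: 3072 lanes at ~1.1 GHz -> ~6.8 TFlops
print(peak_fp32_tflops(3072, 1.1))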
 
Moore's Law is always on the verge of ending or slowing down if you listen to some people; it just so turns out they were (partially!) right for once... Still, either AMD or NVIDIA could likely deliver 10 to 15 FP32 TFlops in early 2016 if they wanted to for a HPC-only part on 14nm, so we're not as far off as it might seem. The scaling always looks worse when you're at the end of a generation.
This article, "Intel delaying 10nm for 9 months?", has been doing the rounds lately. Seems legit. And as has been mentioned, XXnm nodes have for some time been more marketing terms for new lithographic processes than the actual size of any particular feature.
So in some respects there is no question that progress has been slowing down over the last decade or so. This is not necessarily disastrous, but the landscape is changing.
That said, you're perfectly right that the scaling always looks worse at the end of a generation. Samsung is in volume production on 14nm FF, and TSMC will be on 16nm FF+ in a quarter or so, and these processes will bring substantial benefits to GPUs when taken advantage of, particularly together with HBM pushing memory bottlenecks further out in time. Together with the 10nm processes, we may still see a factor-of-four GPU performance increase over the next three or four years. But beyond that, the crystal balls grow murky indeed, and I think it is unwise to deny the issues lithographic scaling is facing, both in terms of technology and cost.
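
On the HBM point, the bandwidth arithmetic alone shows why it pushes memory bottlenecks out. A quick sketch using representative first-gen HBM and GDDR5 numbers (bus width times per-pin data rate, divided by eight); the card configurations below are just examples:

# Memory bandwidth = bus width (bits) * data rate (Gbps per pin) / 8 -> GB/s.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8.0

# 384-bit GDDR5 at 7 Gbps (high-end 28nm card): ~336 GB/s
print(bandwidth_gbs(384, 7.0))
# Four first-gen HBM stacks, each 1024-bit at 1 Gbps: ~512 GB/s
print(bandwidth_gbs(4 * 1024, 1.0))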
 
Interesting link, thanks. I wonder how much of that is technology vs economics. 10nm and below without EUV or Multibeam is really expensive (ah, how ironic that EUV is now seen as a cost-reduction technology!) - and if Intel somehow managed not to delay their roadmap, they would have to use multi-patterning at 7nm, which is crazy.

I really hope the industry eventually settles on multibeam as it could make the industry much more interesting (lower risk for start-ups, more chip revisions, etc.) but at this point it looks like EUV is a near certainty, unfortunately.
 
10nm and below without EUV or Multibeam is really expensive (ah, how ironic that EUV is now seen as a cost-reduction technology!) - and if Intel somehow managed not to delay their roadmap, they would have to use multi-patterning at 7nm, which is crazy.
Without an updated illumination source, the fuzziness in the resulting features that comes from working so far below the original wavelength keeps getting worse. Cleaning things up a bit might buy some more leeway in what is going to be a difficult battle with variability, and maybe stave off concerns that, once we start delving into sub-10nm, probabilistic behavior at the quantum level will start impinging on the logical behavior of the circuits.

At the deeper end, I wonder whether we reach practical limits on the number of atomic layers first, or whether there is a threshold where the additional transistor budget per node gets eaten up by the area and transistors devoted to reliability and error correction.

I really hope the industry eventually settles on multibeam as it could make the industry much more interesting (lower risk for start-ups, more chip revisions, etc.) but at this point it looks like EUV is a near certainty, unfortunately.
It doesn't seem like multibeam can be sped up sufficiently to reach the throughput needed for volume production.
 
Review: EVGA GTX 970 SSC in SLI vs. Titan X and R9 295X2
The GeForce GTX 970 remains a good GPU for full-HD or QHD gaming, even with the recent 'Memorygate' issue fresh in enthusiasts' minds. Available from £255 in reference form, and significantly faster once a high-profile partner such as EVGA has given it the OC treatment with the SSC variant, two cards in one system offer enough performance to be a viable solution at a lofty 4K resolution.

http://hexus.net/tech/reviews/graphics/82060-evga-gtx-970-ssc-sli-vs-titan-x-r9-295x2/
 
970 is Maxwell, right? I'm getting one tomorrow, with a voucher for The Witcher III.

EDIT: it's tomorrow already, and it's now in my PC. Valley benchmark already run.
FPS: 92.7
Score: 3877
Min FPS: 28.0
Max FPS: 145.0

Render: Direct3D11
Mode: 1920x1200 fullscreen
Preset: Custom
Quality: High
Multi-monitor: Wall 1x1

Platform: Windows 8 (build 9200) 64bit
CPU model: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz (3392MHz) x4
GPU model: BB Capture Driver 3.40.0.0/NVIDIA GeForce GTX 970 9.18.13.5012 (4095MB) x1
 
I was just testing my MSI 970 on Valley for awhile after applying a +50v/+100 core/+400 memory. Going to stress test it in GTA tonight and take it up some more if I can.
 
970 is Maxwell, right? I'm getting one tomorrow, with a voucher for The Witcher III.

EDIT: it's tomorrow already, and it's now in my PC. Valley benchmark already run.

I was just testing my MSI 970 on Valley for awhile after applying a +50v/+100 core/+400 memory. Going to stress test it in GTA tonight and take it up some more if I can.

Hey now the three of us have the same PC! Glad to see you finally rocking some serious GPU power!

pjbliverpool it's your turn for a 970. I went from a 670 to a 970 and it was like :oops:

And yeah, the GTX 970 is GM204 (Maxwell v2). Malo and I have the MSI Gaming 4G 970. It's a badass card.
 
Nice! Playing some three-screen Assetto Corsa on it now, which is a lot of fun.

EDIT: Three-screen The Witcher 2 is a bit too much, but I get the strong impression they took it into account, as there's a lot to see. Will add some shots later.
 
pjbliverpool it's your turn for a 970. I went from a 670 to a 970 and it was like :oops:

Yeah, I'm starting to feel the itch, especially given that I use 3D Vision when it's supported, which requires 2x the performance. I'm trying to hold out for Pascal though, with that 14nm and HBM2 goodness!
 
Dude, you'll be waiting a long time. If you can spare the $330, the 970 is a terrific upgrade from a 670 or 680. Plus you can sell the 670 for at least $100.
 