NVIDIA Maxwell Speculation Thread

So the only constraint would be chip yields... and my hefty price tags suddenly become a little less unrealistic.
 
So the only constraint would be chip yields... and my hefty price tags suddenly become a little less unrealistic.
Yields which, on current 28nm and after adding a healthy amount of redundancy and/or disabled cores, should be very decent.

Not that they ever will, but it's all very doable.
 
boxleitnerb said:
Because 20nm might be too broken in the beginning, too expensive, so that a big Maxwell@28nm would make sense?
I'm not quite sure you understood the challenge. I am looking for credible evidence that such a chip ever existed.

But just to indulge your train of thought here: you are saying Nvidia originally designed for 20nm, decided that was too broken/expensive/etc., ported the design to 28nm, and then decided to cancel that one as well and go back to 20nm... or is it 16nm now?
 
I'm not quite sure you understood the challenge. I am looking for credible evidence that such a chip ever existed.

But just to indulge your train of thought here: you are saying Nvidia originally designed for 20nm, decided that was too broken/expensive/etc., ported the design to 28nm, and then decided to cancel that one as well and go back to 20nm... or is it 16nm now?

I would rather imagine they designed for 28nm AND 20nm (with 28nm as an early release obviously and 20nm as the shrink/refresh), then skipped 28nm and pulled in 20nm (if possible).
 
Well you are certainly free to believe that if you wish... I would kindly suggest that you apply a healthy dose of Occam's razor.
 
How so? The simplest explanation for using 28nm is 20nm not being ready or economical enough. Nvidia would have had to make that decision at least 18 months ago, when what they knew about the process was very different from what they know today.
 
Except now they are not using 28nm, because the chip (which never really existed) has been cancelled, remember?

boxleitnerb said:
The simplest explanation for using 28nm is 20nm not being ready or economical enough.
No. The simplest explanation is that Charlie lies for page hits.
 

I think CUDA is doomed. Our industry doesn’t like proprietary standards. PhysX is an utter failure because it’s proprietary. Nobody wants it. You don’t want it, I don’t want it, gamers don’t want it. Analysts don’t want it. In the early days of our industry, you could get away with it and it worked. We’ve all had enough of it. They’re unhealthy.

-AMD’s VP of channel sales

http://www.hardocp.com/news/2013/08/07/amd_cuda_doomed/

 
You know this is the same guy who was saying nobody cares about AMD except for the Brits, yes? And before his job switch he'd have given you an earful of how awesome CUDA and PhysX are... so I wouldn't really pay much attention.
 
You know this is the same guy who was saying nobody cares about AMD except for the Brits, yes? And before his job switch he'd have given you an earful of how awesome CUDA and PhysX are... so I wouldn't really pay much attention.

Well, I think it is probable that part of his previous job was to emphasize certain things, and since he no longer has to care about that shit, he can be honest with people ;)

Perhaps it is worth posting this here, since we may be expecting Nvidia to go first this round:

TSMC starts production of 20-nanometer circuits in the first quarter of 2014
 
He's probably wrong about CUDA in the HPC space, because it lets NVIDIA introduce new features faster than OpenCL would, and HPC people are usually willing to put in the extra effort.

But in the consumer market, yes, CUDA is dead and no one wants it. OpenCL is supported by AMD, ARM, IMG, Qualcomm, Intel, Vivante, and, well, NVIDIA. Why would anyone writing a consumer application bother with CUDA?
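
To put the portability argument in concrete terms, here's a minimal sketch of my own (not from any shipping app, and assuming an OpenCL 1.x ICD loader and headers are installed): the same handful of calls enumerate whatever vendor's driver happens to be present, whether that's AMD, Intel, NVIDIA, or a mobile GPU, with no vendor-specific API in sight.

```c
/* Minimal sketch: enumerate whatever OpenCL platforms/devices are installed.
 * Assumes an OpenCL 1.x ICD loader and the CL/cl.h headers. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint num_platforms = 0;
    /* First call only asks how many OpenCL platforms are installed. */
    if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platform installed.\n");
        return 1;
    }

    cl_platform_id platforms[8];
    cl_uint n = num_platforms < 8 ? num_platforms : 8;
    clGetPlatformIDs(n, platforms, NULL);

    for (cl_uint i = 0; i < n; ++i) {
        char vendor[256] = {0};
        cl_uint num_devices = 0;
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VENDOR, sizeof vendor, vendor, NULL);
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
        printf("Platform %u: %s, %u device(s)\n", i, vendor, num_devices);
    }
    return 0;
}
```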
 
You know this is the same guy who was saying nobody cares about AMD except for the Brits, yes? And before his job switch he'd have given you an earful of how awesome CUDA and PhysX are... so I wouldn't really pay much attention.

What else would somebody do if they take a pay cheque from Nvidia or AMD? Praise the competition? Silly fellow. Most people work for a living. If you want your pay cheque, you'd better work for your employer and not for the competition. Talking down the competition is typical PR; you don't praise the competition.
 
He's probably wrong about CUDA in the HPC space, because it lets NVIDIA introduce new features faster than OpenCL would, and HPC people are usually willing to put in the extra effort.

But in the consumer market, yes, CUDA is dead and no one wants it. OpenCL is supported by AMD, ARM, IMG, Qualcomm, Intel, Vivante, and, well, NVIDIA. Why would anyone writing a consumer application bother with CUDA?

Well, are there really any OpenCL applications on cell phones, when they are already rare on the desktop? How are the drivers? Can you even run anything OpenCL at all on an Atom's IMG GPU running Windows or Linux?

It's too easy to draw a ticked checkbox and pay lip service.
 
Well, are there really any OpenCL applications on cell phones, when they are already rare on the desktop? How are the drivers? Can you even run anything OpenCL at all on an Atom's IMG GPU running Windows or Linux?

It's too easy to draw a ticked checkbox and pay lip service.

I don't think it's accurate to equate ARM chips with cell phones anymore. They're found in tablets, some of which have keyboard docks, and will gradually be introduced into laptops. Further, OpenCL support (both in hardware and software) will improve over time.

Still, even if you restrict the argument to PCs, you get OpenCL support from AMD, Intel, and NVIDIA. CUDA remains exclusive to NVIDIA, which means about 20% of the market. Hardly worth the effort when OpenCL will work on everything, albeit at some performance cost for NVIDIA GPUs.
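
For what it's worth, here's a rough sketch of what "works on everything" means in practice (again my own toy example, assuming OpenCL 1.x headers and a working driver, with error handling stripped for brevity): the kernel source is plain text that whichever driver is present compiles at runtime for its own hardware, so the same host binary runs on AMD, Intel, or NVIDIA.

```c
/* Toy example: build and run one tiny kernel on whatever GPU driver is found.
 * Assumes OpenCL 1.x headers and an installed driver; error checks omitted. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *x, float a) {\n"
    "    size_t i = get_global_id(0);\n"
    "    x[i] = a * x[i];\n"
    "}\n";

int main(void)
{
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Whichever vendor's driver is present compiles the kernel here. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float host[4] = {1, 2, 3, 4};
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof host, host, NULL);
    float a = 2.0f;
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof a, &a);

    size_t global = 4;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof host, host, 0, NULL, NULL);
    printf("%g %g %g %g\n", host[0], host[1], host[2], host[3]);

    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```

The flip side, as noted above, is that you give up the vendor-specific tuning and the newer features CUDA exposes on NVIDIA hardware.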
 