Yields which, on current 28nm and after adding a healthy amount of redundancy and/or disabled cores, should be very decent. So the only constraint would be chip yields... and my hefty price tags suddenly become a little less unrealistic.
boxleitnerb said:
Because 20nm might be too broken in the beginning, too expensive, so that a big Maxwell@28nm would make sense?
I'm not quite sure you understood the challenge. I am looking for credible evidence that such a chip ever existed.
But just to indulge your train of thought here: you are saying Nvidia originally designed for 20nm, decided that was too broken/expensive/etc., ported the design to 28nm, and then decided to cancel that one as well and go back to 20nm... or is it 16nm now?
boxleitnerb said:
The simplest explanation for using 28nm is 20nm not being ready or economical enough.

No. The simplest explanation is that Charlie lies for page hits.
boxleitnerb said:
It's up for debate which explanation is simpler.

No, it isn't.
I think CUDA is doomed. Our industry doesn’t like proprietary standards. PhysX is an utter failure because it’s proprietary. Nobody wants it. You don’t want it, I don’t want it, gamers don’t want it. Analysts don’t want it. In the early days of our industry, you could get away with it and it worked. We’ve all had enough of it. They’re unhealthy.
Nvidia should be congratulated for its invention. As a trend, GPGPU is absolutely fantastic and fabulous. But that was then, this is now. Now, collectively our industry doesn’t want a proprietary standard. That’s why people are migrating to OpenCL.
You know this is the same guy that was saying nobody cares about AMD except for the Brits, yes? And before his job switch he'd have given you an earful of how ausum CUDA and PhysX are...so I wouldn't really pay much attention.
Well, I think it is probable that part of his previous job was to emphasize certain things, and since he no longer cares about that shit, he can be honest with people.
http://www.sweclockers.com/nyhet/17...-kretsar-i-20-nanometer-forsta-kvartalet-2014
He's probably wrong about CUDA in the HPC space, because it lets NVIDIA introduce new features faster than OpenCL would, and HPC people are usually willing to put in the extra effort.
But in the consumer market, yes, CUDA is dead and no one wants it. OpenCL is supported by AMD, ARM, IMG, Qualcomm, Intel, Vivante, and, well, NVIDIA. Why would anyone writing a consumer application bother with CUDA?
Well, are there really OpenCL applications on cell phones, when they are still rare on the desktop? How are the drivers? Can you even run anything OpenCL at all on an Atom's IMG GPU under Windows or Linux?
It's too easy to tick a checkbox and pay lip service.
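If someone actually wanted to answer the driver question, the first step is just asking the runtime what it can see. Here is a minimal sketch in C, assuming a standard Khronos OpenCL SDK and linking against -lOpenCL; it does nothing but enumerate platforms and devices, which is exactly the "can you run anything at all" test:

/* Minimal OpenCL availability check: list every platform and device the
 * installed ICD/driver exposes. Uses only the standard Khronos host API. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* If this fails or returns zero platforms, no usable OpenCL driver is installed. */
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platforms found.\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
        printf("Platform %u: %s\n", p, pname);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256], dversion[128];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(dversion), dversion, NULL);
            printf("  Device %u: %s (%s)\n", d, dname, dversion);
        }
    }
    return 0;
}

If clGetPlatformIDs comes back empty on that Atom box, there is no usable driver behind the checkbox, whatever the spec sheet claims.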