So I wonder if Nvidia might do real Fusion before even AMD? Wonder whether the GPU core will do CUDA and/or OpenCL and whether CPU and GPU will share address space?
What do you mean by real fusion?
Reading about Nvidia's ARM plans, and Microsoft's Windows 8 announcement, I'm wondering if we're not standing on the threshold of a gigantic paradigm shift in the realm of personal computing - are we even aware of the immense implications for the future this could have?
This could be huge, huge, huge. Apple showed not just once, but thrice, that you CAN in fact switch basic hardware architecture, and do so quite successfully and painlessly! If the suits over in Satan Clara don't have the jitters already, they will soon I bet.
That's because the number of important third-party apps for Mac is in the single digits. Apple does most of the non-OS apps for Mac.
Personally I'm quite ready and willing to say FU to x86. It's lived long past its usefulness; the basic PC architecture is archaic and full of old crap that's dragging it down. Even things like the little-endian binary format of x86, its stack-based FPU and so on just show what a crazy fucked-up old system it really is. No, a clean re-start would be much preferable, and an end of Intel's domination of the semiconductor industry would be a great boon to us all too, I bet.
x87 has been deprecated for years now, even if nv hasn't gotten the memo. Also, what's wrong with little-endian format?
Also, what's wrong with little-endian format?
Nothing IMHO. It's the sensible way to do things.
Probably more a reference to WNT and all the various architectures it ran on back in the day (MIPS, Alpha, etc.).
But none of these was even close to 5% market share.
I thought the difference between endianness was sorta like potato/puhtato. Is there more to it?
You can argue for big/little endian both ways (just turning your memory upside down enough times will make each of them logical in any situation).
However, where little-endian (byte-order) CPUs break down is that the bit order within each byte is for some reason big-endian, making consistent bit shifts impossible. E.g. a 16-bit word will be arranged this way:
76543210 FEDCBA98
If you consume bits from memory (think of streams) you want to shift them out, but it's impossible to get e.g. 3210FEDC with simple shifts because the bit ordering is messed up.
Not at all. There is no memory order for individual bits, because bits don't have an address. Bit shifts work just fine with little endian.
AMD was not a serious competitor until Athlon came out in 1999. K6 was slow and on a shitty platform. K5 was neat but couldn't clock high enough.
Why is this thread in CPU forum?
I thought Maxwell should be discussed in 3d Arch&chips?
Maxwell to use Denver
Awesome.
Just... awesome. To think Nvidia is taking on Intel in the high-end CPU space, with Microsoft (silently, perhaps) backing them... Damn. That's just mind-boggling news. Maybe there will be a day relatively soon when Windows binaries will be dual ARM/x86.
What? I think you got too excited; take a cold shower now!