NVIDIA's Project Denver (ARM-based CPU)

Reading about Nvidia's ARM plans and Microsoft's Windows 8 announcement, I'm wondering whether we're standing on the threshold of a gigantic paradigm shift in personal computing - are we even aware of the immense implications this could have for the future?

I am split on the prospects of this thing. If they can get a CLR-only app store going in time for launch, then it might work out. But if you think Intel will just stand aside and let MS walk all over their monopoly, think again. I bet Intel will crush this threat with their massive investments in architecture and process tech. Also, within a year Medfield should be out, and I remember Intel people claiming that they would close the power gap with ARM by 32 nm.

This could be huge, huge, huge. Apple showed not just once, but thrice, that you CAN in fact switch basic hardware architecture, and do so quite successfully and painlessly! If the suits over in Satan Clara don't have the jitters already, they will soon, I bet. :p
That's because the number of important third-party apps for the Mac is in the single digits. Apple makes most of the non-OS apps for the Mac.

Personally I'm quite ready and willing to say FU to x86. It has lived long past its usefulness; the basic PC architecture is archaic and full of old crap that's dragging it down. Even things like the little-endian byte order of x86, its stack-based FPU and so on just show what a crazy, fucked-up old system it really is. No, a clean restart would be much preferable, and an end to Intel's domination of the semiconductor industry would be a great boon to us all too, I bet.
x87 has been deprecated for years now, even if NV hasn't gotten the memo. Also, what's wrong with the little-endian format?
 
I thought the difference in endianness was sorta like potato/puhtato. Is there more to it?
You can argue for big/little endian both ways (turning your mental picture of memory upside down enough times will make either of them look logical in any situation).
However, where little-endian (byte-order) CPUs break down is that the bit order is, for some reason, big-endian, making consistent bit shifts impossible.

e.g. a 16-bit word will be arranged this way:
76543210 FEDCBA98

If you consume bits from memory (think of streams) you want to shift them out, but it's impossible to get e.g. 3210FEDC with simple shifts because the bit ordering is messed up.

It probably doesn't matter often enough in practice, but it's still an incredibly stupid lack of consistency.
 
I thought the difference in endianness was sorta like potato/puhtato. Is there more to it?

The history of little endian is that RS-232 and other serial connections sent bytes with bit 0 first.

That kinda just stuck.

And nobody says puhtato.
 
However, where little-endian (byte-order) CPUs break down is that the bit order is, for some reason, big-endian, making consistent bit shifts impossible.

e.g. a 16-bit word will be arranged this way:
76543210 FEDCBA98

If you consume bits from memory (think of streams) you want to shift them out, but it's impossible to get e.g. 3210FEDC with simple shifts because the bit ordering is messed up.
Not at all. There is no memory order for individual bits because bits don't have an address. Bit shifts work just fine with little endian.
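
To make that concrete, here is a minimal C sketch of an LSB-first bit reader (the packing convention is an assumption, roughly what DEFLATE uses, and the bitreader/read_bits names are invented just for the example). Bytes are loaded in plain address order and fields come out with ordinary shifts and masks, because shifts operate on the value sitting in a register, not on any "bit order" in memory:

#include <stdint.h>
#include <stdio.h>

/* Bit reader that consumes bytes in memory order and hands out
 * bits starting from bit 0 of each byte (LSB-first packing). */
typedef struct {
    const uint8_t *data;   /* input byte stream                */
    size_t pos;            /* index of the next byte to load   */
    uint32_t bitbuf;       /* bits loaded but not yet consumed */
    unsigned bitcount;     /* number of valid bits in bitbuf   */
} bitreader;

static uint32_t read_bits(bitreader *br, unsigned n)   /* n <= 24 */
{
    while (br->bitcount < n) {                     /* refill from the stream */
        br->bitbuf |= (uint32_t)br->data[br->pos++] << br->bitcount;
        br->bitcount += 8;
    }
    uint32_t val = br->bitbuf & ((1u << n) - 1);   /* take the low n bits */
    br->bitbuf >>= n;                              /* consume them */
    br->bitcount -= n;
    return val;
}

int main(void)
{
    const uint8_t stream[] = { 0xA5, 0x3C };       /* arbitrary example bytes */
    bitreader br = { stream, 0, 0, 0 };
    uint32_t a = read_bits(&br, 3);
    uint32_t b = read_bits(&br, 5);
    uint32_t c = read_bits(&br, 8);
    printf("%u %u %u\n", a, b, c);                 /* prints 5 20 60 */
    return 0;
}

The same loop behaves identically on big- and little-endian hosts; the only real choice a bitstream format makes is which end of each byte gets handed out first.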
 
Yeah, the '90s were the era when RISC was seen as the Intel killer: MIPS, ARM, Digital, Sun, PowerPC, etc. In the end Intel went RISC-like internally in their CPUs and everybody else was annihilated. :) Well, they are all still around, but they aren't really in desktop machines. MIPS is in tons of network gear, ARM is running phones, PowerPC is in servers and consoles...

AMD was not a serious competitor until Athlon came out in 1999. K6 was slow and on a shitty platform. K5 was neat but couldn't clock high enough. And before that they were pretty much an Intel second source. Intel was dominating personal computing with their CPU prices starting at around $300.

Right now we have a sort of mobile computing renaissance going on that is really creating a new computing future as it evolves. Also, most people are buying notebooks now instead of desktops, so the focus is shifting to power efficiency. Intel can't seem to get into the really small mobile devices because x86 doesn't seem to scale down well enough, even with their awesome manufacturing capabilities. Atom was essentially their attempt to do that. Fortunately for them netbooks arrived when they did; otherwise I think Atom would have completely bombed.
 
AMD was not a serious competitor until Athlon came out in 1999. K6 was slow and on a shitty platform. K5 was neat but couldn't clock high enough.

K6 was not shitty.
K6 had a very good integer core, BUT:
1) It did not have a pipelined FPU.
2) It had a worse memory architecture than the Pentium II; its L2 cache sat on the far side of a slow bus.
(That's because AMD had to use the P5's bus protocol; they did not have a licence for the P6's bus, and they did not have the resources and market momentum to create a good bus of their own.)

With a similar memory architecture, the K6 core held its own against the P6 in integer performance (K6-III vs Dixon: the K6 got much better IPC, but P6-based chips could clock a bit higher on the same mfg process).

And K5 was the failure: they designed a chip which had good IPC (though only some 5-15% better than the K6's) but could only reach half the clock speed the K6 later hit on the same mfg process.
What matters is performance = IPC * clock_rate, not either alone.
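
A quick back-of-the-envelope illustration (the IPC and clock figures below are made up purely for the example, not measured K5/K6 numbers):

#include <stdio.h>

int main(void)
{
    /* Hypothetical figures, only to show performance = IPC * clock_rate. */
    double ipc_a = 1.15, clock_a = 116.7e6;   /* ~15% better IPC, half the clock */
    double ipc_b = 1.00, clock_b = 233.3e6;

    double perf_a = ipc_a * clock_a;          /* instructions per second */
    double perf_b = ipc_b * clock_b;

    printf("A: %.0f M instr/s\n", perf_a / 1e6);    /* ~134 */
    printf("B: %.0f M instr/s\n", perf_b / 1e6);    /* ~233 */
    printf("B is %.2fx faster\n", perf_b / perf_a); /* ~1.74x */
    return 0;
}

A 15% IPC edge just can't make up for running at half the clock.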
 
I said K6 was on a shitty platform and it was. Super 7 was terrible because of cheaply-built boards and low quality chipsets from VIA and ALI. Intel was so far ahead in platform quality that it was really quite incredible.

Athlon had the same problems for the first few years, pretty much until NVIDIA came into the picture with nForce. How many revisions of crap did VIA make? They didn't seem to get AGP and PCI working really well until like KT333!

If you could get your K6 system stable, usually by using 3dfx AGP cards or by sticking to PCI, it was a nice CPU for everyone who didn't play a lot of 3D games. The low price (the result of not being able to compete with Intel's quality and performance) was attractive to everyone, though.

K5 was in some ways more interesting to me than K6 because it was entirely home-built. K6 was bought from NexGen. K5 was AMD's first in-house design of an x86 CPU, and it was very advanced. It was similar to one of their non-x86 RISC CPUs and was one of the first RISC-like x86 chips. They just didn't really know how to bring everything together to beat the Intel monster. K6 didn't pull it off entirely either. AMD had to buy Alpha engineers to finally get serious.
 
I am not very optimistic about Win8 on ARM. Microsoft will probably bungle it all up. Are we heading to a world of fat Windows binaries?
 
Maxwell to use Denver
Awesome.

Just... awesome. To think Nvidia is taking on Intel in the high-end CPU space, with Microsoft (silently, perhaps) backing them... Damn. That's just mind-boggling news. Maybe there will be a day relatively soon when Windows binaries will be dual ARM/x86.
 
Just... awesome. To think Nvidia is taking on Intel in the high-end CPU space, with Microsoft (silently, perhaps) backing them... Damn. That's just mind-boggling news. Maybe there will be a day relatively soon when Windows binaries will be dual ARM/x86.
What? I think you got too excited, take a cold shower now! ;):D
 
That's what the piece claims... a very high-performance CPU core for use in supercomputing.

Anyway, I've already had my cold shower(s) for the day. Took a nice sauna earlier and had a chat with a guy studying astrophysics; it was very interesting.
 