Sandy Bridge

A couple of years later you were hard pressed to find many traditional Unix workstations around. The only ones still around were there because of specific apps.

PPro put everyone on notice.
Exactly so. I remember needing a Sun Sparc to run econometric simulations 'til Intel sent me a PPro for beta testing. That put an end to future Sparc workstation orders in the department.
 
When the first Athlon was introduced, you then had two vendors of really fast x86 CPUs; the game was over, if you set aside the dishonest PowerPC G3 benchmarks from Apple :).
The Pentium III had replaced the MIPS CPUs in Silicon Graphics workstations.
Are you referring to the NT workstations? MS wouldn't support MIPS so of course SGI had to use Intel CPUs. SGI was attempting to create a cheaper workstation platform, not replace their higher-end machines.
 
OpenGL guy, NT4 had support for MIPS and Alpha processors. Unless you're thinking of something else?
 
I designed Alphas and StrongArm in the past...

But the PPro really was the turning point. I remember working for CAEN (which ran all the workstations and servers for the engineering school) at the University of Michigan. The PPro was the point where even the ultra-cheap deals we got out of the vendors for Sun, HP, IBM, and DEC workstations really weren't worth it anymore. We could get these Dell boxes even cheaper, slap Linux on them, and they ran everything just as well if not better.

A couple of years later you were hard pressed to find many traditional Unix workstations around. The only ones still around were there because of specific apps.

PPro put everyone on notice.

Yes, the Pentium Pro was where Intel really turned the tables through a remarkable technological tour de force: a new microarchitecture adapting many of the era's new ideas to their ISA, a top-level manufacturing process coupled with an unprecedented transistor count, to the extent that the CPU had to be manufactured as a dual-chip affair with new packaging tech(!), and new levels of power draw that had even Intel's engineers questioning how on earth to cool the thing.

It was a technological knock-out, and it brought Intel to the performance level they desired.

(In a sense, it much resembles ATI's R300 chip in the GPU space - unprecedented feature set and performance, bought at the price of roughly three times the power draw of their earlier device. It got the job done - but was it really such a good move in retrospect?)

For a reasonably long time now, and for most computational tasks, performance per thread/task has been adequate (luckily, since it has moved forward at a glacial pace for some time). Much of computing hasn't been concerned with absolute performance but with performance per watt, since that is what determines packaging density at the high end and, arguably far more important, battery life, form factors and applicability in the mobile space. The "Moore's law to the rescue" approach that Intel has used to great advantage, riding on their manufacturing prowess, isn't quite as effective any more. Still useful - but it behooves us to observe that x86 has been kept competitive by remarkable engineering effort, paid for by extremely large volumes in a monopolistic market, and at a relatively high cost in actual number of gates.
Now, going forward, how much has the game changed, what are the trends, and what might the implications be?
 
(In a sense, it much resembles ATI's R300 chip in the GPU space - unprecedented feature set and performance, bought at the price of roughly three times the power draw of their earlier device. It got the job done - but was it really such a good move in retrospect?)
Did you forget the smiley in your post or something? Would you have been happier if ATI had released NV30? ATI hit one out of the park with R300; I don't know anyone who wasn't impressed with it.

-FUDie
 
Considering that apps use tens of MBs of data per frame in the form of texture, render target, depth and vertex data, I can see the L3 cache getting trampled by the GPU constantly, meaning its usefulness to the CPU would be decreased. What about your desktop? Would RAMDAC reads also go through the L3? If so, then just refreshing your display will pollute the L3.
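Just to put a rough scale on that, here's a quick back-of-the-envelope sketch in Python; the 8 MB L3, 1920x1200 desktop and 32-bit front buffer are purely illustrative assumptions, not Sandy Bridge specifics:

Code:
# Back-of-the-envelope: how a shared last-level cache compares to plain
# display scan-out. All figures below are illustrative assumptions.
L3_BYTES = 8 * 2**20          # assumed shared L3 size: 8 MB
WIDTH, HEIGHT = 1920, 1200    # assumed desktop resolution
BPP = 4                       # 32-bit front buffer
REFRESH_HZ = 60               # assumed refresh rate

frame_bytes = WIDTH * HEIGHT * BPP
refresh_bw = frame_bytes * REFRESH_HZ

print(f"Front buffer:    {frame_bytes / 2**20:.1f} MB "
      f"(~{frame_bytes / L3_BYTES:.1f}x the assumed L3)")
print(f"Refresh traffic: {refresh_bw / 2**20:.0f} MB/s just for scan-out")

Even before counting texture, render-target and depth traffic, a single front buffer at that resolution is already bigger than the assumed L3, so letting scan-out reads allocate in the cache would evict more or less everything every frame.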

I'd rather ask whether this Sandy Bridge 4c w/ GPU is going to be dual channel like the Lynnfield mainstream parts, so it could have 64 bits dedicated to the GPU on that ring bus to pull from memory and put into the L3 cache when needed for the CPUs. On the other hand, Wikipedia mentions in its list something like 0-512MB of DDR/GDDR, so it might have a sideport of its own for the needs of the Intel GPU (Intel-AMD cross-licensing :LOL:); a 32-bit 512MB DDR4-2666 chip directly on-board would probably be pretty attractive and easily reachable in 2011.
 
I'd rather ask whether this Sandy Bridge 4c w/ GPU is going to be dual channel like the Lynnfield mainstream parts, so it could have 64 bits dedicated to the GPU on that ring bus to pull from memory and put into the L3 cache when needed for the CPUs. On the other hand, Wikipedia mentions in its list something like 0-512MB of DDR/GDDR, so it might have a sideport of its own for the needs of the Intel GPU (Intel-AMD cross-licensing :LOL:); a 32-bit 512MB DDR4-2666 chip directly on-board would probably be pretty attractive and easily reachable in 2011.

The problem with that theory is that the Wikipedia article (and the original Intel presentation) mentions 64 GB/s of bandwidth, so unless you are willing to believe that figure for sideport memory, it's unlikely (rough numbers sketched below).

Also, going by the die shot description, some of the details on Wikipedia and in the original Intel plans have already changed. The cache latencies and sizes are different.
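To put some rough numbers on the bandwidth argument: a minimal sketch, where the 64 GB/s figure is the one quoted from the Wikipedia article and Intel presentation, and the 32-bit DDR4-2666 part is the hypothetical sideport configuration floated above.

Code:
# Sanity check: hypothetical 32-bit DDR4-2666 sideport vs. the quoted 64 GB/s.
RING_BW_GBPS = 64.0        # bandwidth figure quoted in the article/presentation
SIDEPORT_BUS_BITS = 32     # hypothetical sideport bus width
SIDEPORT_MTPS = 2666       # DDR4-2666: 2666 mega-transfers per second

sideport_bw = SIDEPORT_BUS_BITS / 8 * SIDEPORT_MTPS / 1000   # GB/s

print(f"Hypothetical sideport: {sideport_bw:.1f} GB/s")
print(f"Quoted figure:         {RING_BW_GBPS:.0f} GB/s "
      f"(~{RING_BW_GBPS / sideport_bw:.0f}x higher)")

A single 32-bit DDR4-2666 channel tops out around 10-11 GB/s, roughly a sixth of the 64 GB/s figure, which is why that number is hard to reconcile with sideport memory.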
 
Still meh. Show me 8 cores/16 threads or go home!

:p j/k of course, the "mainstream" quad cores are still promising chips, but I'm personally more interested in the enthusiast-class parts.
 
Maybe in Q1 2011 Intel can release a chip that's as good as something NVidia and ATI put out... about 6 years ago. That's something to look forward to, I suppose. :LOL:
 