NV30 pixel shader, very interesting...

I think the tricky part is going to be writing an OS that will make efficient use of it. It will also be interesting to see how its real-world performance compares. A teraflop processor isn't much use if you need a thousand flops to do what an 80686 can do in 10.
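Just to put toy numbers on that (every figure below is invented for illustration; none of it is a real spec):

#include <stdio.h>

int main(void) {
    /* Made-up figures, only to illustrate the argument above:
       peak throughput matters less than how many ops a task costs. */
    double cell_peak = 1e12; /* 1 TFLOPS peak, the headline number     */
    double x86_peak  = 5e9;  /* 5 GFLOPS peak for a conventional CPU   */
    double cell_ops  = 1000; /* flops the task costs on the new design */
    double x86_ops   = 10;   /* flops the same task costs on x86       */

    /* Effective rate = peak rate / ops per task */
    printf("Cell: %g tasks/s, x86: %g tasks/s\n",
           cell_peak / cell_ops, x86_peak / x86_ops);
    /* Prints 1e+09 vs 5e+08: a 200x peak advantage shrinks to 2x
       when the task costs 100x as many ops. */
    return 0;
}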
 
No hitches?

Well, of course there is going to be some hitch... there always is...

Plus, if "we" means TSMC and company... well, I do not see Intel having issues at 0.13 µm, and I do not see them having issues at 90 nm when they already have libraries for sub-90 nm technology. I do not see IBM having problems with sub-100 nm SOI technology. I do not see Sony or Toshiba having problems at 0.13 µm (a die shrink of both the EE and the GS, and it is rumored that the combined EE+GS chip at 0.13 µm is not quite a failed experiment, if you catch my drift). I see Sony and Toshiba having announced a 65 nm process (completed, not fully implemented yet, but there are close to two years to do that) and a license of IBM manufacturing tech...

The one I see having problems is nVIDIA, which outsources its chip manufacturing to others...
 
Crusher:

What operation could possibly need a HUNDRED TIMES more ops on a Cell than on an x86 processor? That's just plain baloney. What's the point of making up contrived scenarios like that one, really?

*G*
 
I believe the point was to note that this is a completely different method of processing, and what has been optimized to be calculated quickly and efficiently on an x86 processor, or on a GPU, might not be as fast or efficient on Cell. That goes especially for the software side: even if the hardware itself is capable of performing an operation as fast as an x86 or a GPU can, that doesn't mean programmers will be able to implement it in software that way.

Nobody is saying Cell is definitely going to fail; we're just speculating on why it might. Like many products that haven't been released yet, the only information available about Cell comes from press releases, so you have to examine the basic design philosophy yourself and look for weak areas if you want to know what the possible points of failure are. Increased overhead to perform operations is one possibility, as the sketch below tries to show.
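One toy example of the kind of overhead meant here (nothing Cell-specific, just the general idea): a data-dependent branch is natural on a conventional CPU, but a wide streaming design often forces a branch-free rewrite that spends arithmetic on every element, needed or not:

#include <stdio.h>

#define N 8

int main(void) {
    int a[N] = {3, -1, 4, -1, 5, -9, 2, -6};
    int sum_branchy = 0, sum_predicated = 0;

    /* Natural on a conventional CPU: one compare-and-branch per element. */
    for (int i = 0; i < N; i++) {
        if (a[i] > 0)
            sum_branchy += a[i];
    }

    /* Branch-free version, the kind of rewrite a streaming/SIMD design
       tends to force: build a mask and do the arithmetic on every
       element whether or not it contributes. Same answer, more ops. */
    for (int i = 0; i < N; i++) {
        int mask = -(a[i] > 0);   /* all-ones if positive, else zero */
        sum_predicated += a[i] & mask;
    }

    printf("%d %d\n", sum_branchy, sum_predicated); /* both print 14 */
    return 0;
}

On eight elements the difference is noise, but multiplied across a whole rendering or physics pipeline, that style of rewrite is exactly the sort of per-operation overhead being speculated about.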
 
Crusher... there is the Visualizer, which is flexible but also provides some HW support for the basics, so I guess they are not asking software developers to re-write the whole 3D pipeline; plus, a lot is going to be provided in the HLSL...
 
Crusher:

I really don't see your point at all. You're just speculating wildly, and that doesn't seem very meaningful to me. Sony, Toshiba and IBM have some DAMN smart people working on this thing; you seriously don't think they've already thought of the possible weak points of this system and addressed them in a manner that is at least sufficiently cost-effective? :)

Nobody here actually knows what Cell will look like, how fast it will be clocked, what its instruction set will be, how efficient it will be, etc., so what's the point of speculating about it?


*G*
 