Toshiba, Sony close to 65nm sample production

I don't know anything about AI, but on the physics side of things you can eat up as much processing power and memory as you can afford (cloud simulation, water simulation, cloth simulation which avoids self-intersection, more advanced breakage/deformation models, etc...).

Serge
 
Panajev2001a said:
Dio said:
Dave, I offer you the same advice Simon gave me a few months back in a similar position: give it up mate. There's no point.
Well, thanks Dio... I am here trying to have a decent conversation ( sometimes people disagree ) and I do not think we should be that negative.
I was not advising Dave to give up the thread altogether, just give up the bits that there was no point to :).

Sorry for not being quite clear, but I think Dave knew what I meant (he also had the advantage of remembering the earlier post).
 
Let's just hope that Cell implements an automatic APU power distribution scheme, similar to the one that Tokyo guy described in another thread.
 
DaveBaumann said:
Panajev2001a said:
If you really look at it, you will see that one shoe is not really fitting all workloads/activities, or trying to. Rather, we have several people who now happen to want to do a very similar activity, and they basically all need the same shoe model, just in different sizes, as that model in particular is the best for that activity; they do not care much for other activities, so that model will do fine enough.

When I said one size fits all, this is what I meant - I should have said "one size scales to fit all". You still have a tricky balance to meet the right processing requirements at the lowest level and not end up with redundancy in certain applications when you scale that up. I'm not saying you can't meet your requirements with success, but it seems inevitable you are going to end up with some redundancy somewhere in some applications you put it to, which goes back to the more focused units discussion earlier.

Sure you are going to end up with redundancy.
The cell approach was conceived in order to reduce redundancy vs continuing to evolve CPUs along their current trajectory. (The problem being that dedicating more and more control logic and functional units to extracting maximum instruction-level parallelism yields diminishing returns. Apart from system design costs, you end up with ever-growing parts of your CPU sitting idle, waiting. Intel's (and IBM's) hyperthreading is an example of how chip designers try to make more efficient use of these otherwise idle resources - again at a cost of increased control logic, and with extremely limited scaling possibilities.)

The cell approach wasn't originally conceived to replace dedicated ASICs. That is not to say that it cannot be applied at that level as well. The processor envisioned in the patent we all believe outlines the processor of the PS3 looks pretty damn targeted toward certain kinds of processing and will likely be pretty efficient at performing such tasks. It's likely to be very inefficient indeed at the kind of single-threaded clerical tasks that x86 is targeted at.

On the other hand, most of those tasks are efficiently handled by CPUs found in cell phones today, so maybe it's time for a change in focus.

What you guys are mostly debating is how efficient the PS3 processor will be at tasks that correspond to the work of the CPU + part of the GPU in a PC (Xbox) system. That's a relevant question in a forum like this, but my interest in this processor is completely unrelated to gaming or graphics. It is grounded in a constant interest in tools for attacking certain computationally intensive problems, and in a general interest in the direction of computers in general. Massive parallelism hasn't been a viable alternative in consumer space so far. But transistor densities are moving in a direction where it will become so. The software cell concept scales well with the number of processing elements, while the current PC paradigm doesn't. If and how this will affect mainstream computing remains to be seen. But it's an interesting question, no?
 
psurge said:
I don't know anything about AI, but on the physics side of things you can eat up as much processing power and memory as you can afford (cloud simulation, water simulation, cloth simulation which avoids self-intersection, more advanced breakage/deformation models, etc...).

That's true, but honestly current hardware is fast enough for a lot of that; it's just that implementing it in software is so complicated that it hasn't happened yet.
 
DaveBaumann said:
But not necessarily to the same level of performance? i.e. if you have one APU that needs the multimedia capabilities to display at 640x480, when you scale that up to the PS3 that could be an overshoot for TV displays.

Certainly there could be, but we see "overshooting" all over the industry. (Most certainly PC's, where a competent gaming rig will be WAY better than necessary for any other task but, say, specific ones with fierce requirements one way or another, such as 3D rendering.) This doesn't HURT anything else, and it's not a waste of power, since the power is there exactly for what NEEDS to be tapped. By extension, other than those same tasks, a computer I build and fiercely tweak for gaming/multimedia will carry over to those other tasks as a simple matter. And while it's not a proper analogy from a PC perspective, most chips with a semblance of power and programmability could be tapped to perform lesser tasks; with time and attention paid to programming, probably quite well.

In the case of devices like televisions, I assume "power waste" is shruggable unless there is also "cost waste" going on. If CELL scales as intended they can push the size and cost down to where it does the job they want it to do best, and even if there's overkill they may well still be spending less money on the chips than they would buying them from another company. (And/or not have to worry about keeping up with separate chip designs for that type of device, or occupying fab time for them...) And of course there's much talk about the interconnectivity of CELL devices, which, if it can be aligned and tapped, could put overages to practical use otherwise. (And just how pleased as punch would they--and customers--be if CELL-based Sony televisions can make the PS3's hooked up to them perform better?) Personally I think that's an "eventuality" at best and will take a while to realize, as we likely won't see popular CELL-based devices one would hook one's PS3 up to for a while, and I don't think we'll see the software support that in particular would need at launch, nor developers who would know how to utilize it well. But if it is indeed a "possibility" then it turns "overkill" into both a technical and a marketing advantage, and since they would seem to be prepping CELL for a long and varied life, it could be rather droolworthy.
 
cthellis42 said:
DaveBaumann said:
But not necessarily to the same level of performance? i.e. if you have one APU that needs the multimedia capabilities to display at 640x480, when you scale that up to the PS3 that could be an overshoot for TV displays.

Certainly there could be, but we see "overshooting" all over the industry. (Most certainly PC's, where a competent gaming rig will be WAY better than necessary for any other task but, say, specific ones with fierce requirements one way or another, such as 3D rendering.) This doesn't HURT anything else, and it's not a waste of power, since the power is there exactly for what NEEDS to be tapped.

This is where you are wrong.
But you are conditioned by the industry to feel as you do.
You are paying in size, noise, power consumption and money. That's a fairly large total price to pay, IMHO.

The computers typically used at work for non-computational tasks are roughly as powerful as PDAs. Apart from the drawbacks inherent in typical Wintel desktop systems, they do the job well. But they could be much smaller, completely quiet, much more energy efficient and generally fit better into the offices they populate. Look at the smallest VIA EPIAs, or PDAs for inspiration. Or the flat-panel iMac for that matter, where the entire computer doubles as the base of the ingenious panel support.

The industry already feels the effects of this in the migration towards mobile computers. Which is occurring despite the fact that most offices and institutions still insist on stationary systems, and despite the lower price/performance and typically higher absolute price of portables. The personal computing landscape is ripe for change, it is changing, but as yet it is changing while largely staying within the PC paradigm.
 
nondescript said:
I don't see the conflict between multimedia, networking, and 3D graphics. Since the APUs are really more like general-purpose DSPs, I think the 3D graphics capability is easily transferable to multimedia. With things like MPEG encoding/decoding, all you really need is fast 2D-FFTs and 2D-IFFTs. If you have a DSP-like architecture with MAC units, implementing the decimation algorithm will be fast. You're going to be passing substantial data around, so you need the inter-APU bandwidth. Bandwidth you would need in passing physics, texture, vertex, and pixel data anyway.

I guess the key is that multimedia, networking and 3D graphics all share the key trait of data and logic locality. They are data heavy and loosely logically coupled. Since the vast majority of the computations do not depend on other computations, they can be carried out in a parallel fashion. And even if they are dependent on each other, they mostly depend on local data. Physics calculations don't need to know what's happening on the other side of the game world, until their effects propagate over. Because multimedia, networking and 3D graphics share the same kind of processing requirements, they can be met by the same type of processor.

Obviously, there are some optimizations that are application specific, like hardwired shaders, that PS3 will not have, and that will affect its performance. But I don't think there will be the multimedia or network overkill Dave implies.

Touche', good post rationalizing what I was trying to say in 5 posts :(
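
As an aside, to make the MAC point above concrete, here is a minimal sketch in C of the separable row/column decomposition such a transform uses. The 8x8 block size and the generic basis matrix are illustrative assumptions about the kind of kernel (a DCT or FFT stage) an MPEG codec might run, not anything specific to the APU.

[code]
/* Apply a 2D separable transform to an 8x8 block: rows first, then columns.
   Every inner loop is a plain multiply-accumulate, which is why DSP-like
   units with MAC hardware handle this kind of kernel well. */
#define N 8

static void transform_2d(const float basis[N][N], const float in[N][N], float out[N][N])
{
    float tmp[N][N];

    /* Row pass: tmp = in * basis^T */
    for (int r = 0; r < N; r++)
        for (int k = 0; k < N; k++) {
            float acc = 0.0f;
            for (int c = 0; c < N; c++)
                acc += in[r][c] * basis[k][c];   /* MAC */
            tmp[r][k] = acc;
        }

    /* Column pass: out = basis * tmp */
    for (int k = 0; k < N; k++)
        for (int c = 0; c < N; c++) {
            float acc = 0.0f;
            for (int r = 0; r < N; r++)
                acc += basis[k][r] * tmp[r][c];  /* MAC */
            out[k][c] = acc;
        }
}
[/code]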
 
Note that I was using those two examples as ones that Panajev brings up. The point I'm making is that you need to strike a balance of abilities / performance for the lowest level of application you are going to set them to in relation to the scaled up versions.
 
...

I think you people are misunderstanding the true motivation behind CELL; CELL is not about delivering a Teraflop in a box (you would have to link 4~8 PSX3s to even claim such a paper spec); it really is a post-PC home networking standard.

Unless Kutaragi and Co. are dumb, SCEI engineers are very well aware of the fact that games are unsuitable for Blue Gene-style parallel processing and that the number of people who could even attempt such code is very limited. Then what is the motivation behind CELL? Low-cost home networking infrastructure; nothing more, nothing less.

It is next to impossible to make CELL/PSX3 deliver a significant amount of FLOPS for any single application, but applications can interact more freely using CELL.
 
...

What CELL brings to the table is not TERAFLOP games, but chip-level built-in networking and SSI (Single System Image), so that any number of independent processes can be made to cooperate via message passing.
 
I think you people are misunderstanding the true motivation behind CELL; CELL is not about delivering a Teraflop in a box (you would have to link 4~8 PSX3s to even claim such a paper spec); it really is a post-PC home networking standard.

Unless Kutaragi and Co. are dumb, SCEI engineers are very well aware of the fact that games are unsuitable for Blue Gene-style parallel processing and that the number of people who could even attempt such code is very limited. Then what is the motivation behind CELL? Low-cost home networking infrastructure; nothing more, nothing less.

It is next to impossible to make CELL/PSX3 deliver a significant amount of FLOPS for any single application, but applications can interact more freely using CELL.

Nope.

In fact, every claim by IBM or Sony proves that you're talking from your ass again.

Broadband Engine isn't about shoving a few low cost cores onto a chip.
 
ERP said:
Fafalada said:
ERP said:
Just how much code in a game is actually parallel in nature?
Outside of the rendering problem, how much of the game logic in your average game is going to be able to exploit multiple processors effectively?
Avoiding paradigm shifts - you can always pipeline the MPU unfriendly parts :p

There just isn't that much stuff in the average game outside of rendering that works on a lot of sequential, non-dependent data. Besides, pipelining is only a win when communication overheads don't kill you.

Physics and A.I. are both things that I see having more than decent parallel implementations.
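
As a small illustration of why that kind of physics parallelizes, here is a sketch in C of a Verlet particle update: each particle's new position depends only on its own previous state plus a global force, so disjoint slices of the array can be handed to different processors with no coordination inside the step. (The particle layout and time step are made up for the example.)

[code]
typedef struct {
    float x, y, z;        /* current position */
    float px, py, pz;     /* previous position */
} particle;

/* Advance one slice of particles by one Verlet step under gravity.
   No particle reads another particle's data, so separate slices can be
   updated by separate processors in parallel. */
static void verlet_step(particle *p, int count, float dt)
{
    const float ax = 0.0f, ay = -9.81f, az = 0.0f;   /* constant gravity */

    for (int i = 0; i < count; i++) {
        float nx = 2.0f * p[i].x - p[i].px + ax * dt * dt;
        float ny = 2.0f * p[i].y - p[i].py + ay * dt * dt;
        float nz = 2.0f * p[i].z - p[i].pz + az * dt * dt;

        p[i].px = p[i].x;  p[i].py = p[i].y;  p[i].pz = p[i].z;
        p[i].x  = nx;      p[i].y  = ny;      p[i].z  = nz;
    }
}
[/code]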

Also, going by the patent I would not worry too much about communication overheads (well, of course, the ideal is to re-calculate when possible rather than re-send, and if we apply this idea when we can, we will make better use of the available bandwidth): 1,024-bit buses and fast e-DRAM are there for a reason.

CELL's architect and IBM Fellows have stated in the past that the problem with the kind of architectures they see emerging is not computation but data management, so I do not expect them to overlook this issue with CELL.

Fafalada said:
Anyway, I realize the specific details don't relate to the 'average application' scenario, but I would argue that the main bottlenecks of display list building I was faced with could be distributed across MPUs (if I had them) quite easily, and efficiently.

I would argue that the main bottlenecks for display list building are reading and copying data. Doing that over a communication link between processors is probably going to cost more than doing it to memory.

Certain memory sandboxes could be set up to be shared areas between APUs, sort of where communication would happen (say, like a local lobby :) ), and all could be handled by the APUs themselves (the PUs would handle the setting up of the memory sandboxes and all) through DMA.

Again, I do not think we should forget that, in the patent embodiment, each APU in a PE is connected to a PE bus which is 1,024 bits wide and that, if utilized correctly (if we keep trying to send 16-bit packages each cycle we are kinda wasting the width of the bus itself; we would have to break communication data on 1,024-bit boundaries, or 128-byte packets, if possible), should provide plenty of inter-APU bandwidth.
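
To put the 128-byte idea in concrete terms, here is a rough sketch in C of batching a byte stream into full 128-byte, aligned packets before handing them to a DMA engine. The dma_put() routine is purely hypothetical (the patent does not describe an API, so here it is just a memcpy stand-in), and the packet size simply mirrors the 1,024-bit bus width discussed above.

[code]
#include <stdint.h>
#include <string.h>

#define PACKET_BYTES 128   /* matches the 1,024-bit bus width discussed above */

/* Hypothetical DMA primitive: here just a memcpy stand-in for whatever the
   APU's DMA engine would actually expose. */
static void dma_put(void *sandbox_dst, const void *src, size_t bytes)
{
    memcpy(sandbox_dst, src, bytes);
}

/* Pack an arbitrary byte stream into full 128-byte packets so every bus
   transaction uses the whole width instead of dribbling out small writes. */
static void send_packed(void *sandbox, const uint8_t *data, size_t len)
{
    uint8_t packet[PACKET_BYTES] __attribute__((aligned(PACKET_BYTES)));
    size_t offset = 0;

    while (offset < len) {
        size_t chunk = len - offset;
        if (chunk > PACKET_BYTES)
            chunk = PACKET_BYTES;

        memcpy(packet, data + offset, chunk);
        if (chunk < PACKET_BYTES)
            memset(packet + chunk, 0, PACKET_BYTES - chunk);   /* pad the tail */

        dma_put((uint8_t *)sandbox + offset, packet, PACKET_BYTES);
        offset += chunk;
    }
}
[/code]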

I'm not really trying to say that games can't make use of multiple processors for their core logic, just that the way most are currently constructed, it's difficult and time consuming, not to mention debugging hell. And in a deadline-oriented industry that usually equates to "won't happen often".

I understand this last part ERP and I have no criticism of it.

I would say that a big part for STI and for Sony and Toshiba is the software side of things: compilers, APIs, tools, Middleware, etc...

I think that next-generation we will see Renderware even stronger than it is now and I think Criterion is working closely with Sony and Toshiba in developing middleware and simulation tools for PlayStation 3.

If things pan out the way they seem to be going we will not see PlayStation 3 before 2006, and I do not think it is the hardware side of things that is lagging; rather, the software side is taking a while to be perfected. I do not think Sony will produce another PlayStation 2-like scenario, with incomplete/untranslated documentation and scarce high-level API support.
 
Re: ...

Deadmeat said:
I think you people are misunderstanding the true motivation behind CELL; CELL is not about delivering a Teraflop in a box (you would have to link 4~8 PSX3s to even claim such a paper spec); it really is a post-PC home networking standard.

Unless Kutaragi and Co. are dumb, SCEI engineers are very well aware of the fact that games are unsuitable for Blue Gene-style parallel processing and that the number of people who could even attempt such code is very limited. Then what is the motivation behind CELL? Low-cost home networking infrastructure; nothing more, nothing less.

It is next to impossible to make CELL/PSX3 deliver a significant amount of FLOPS for any single application, but applications can interact more freely using CELL.

Weird how the Sony guys at the last GDC were asking developers how accustomed they were to SMP coding in games, huh?
 
Entropy said:
DaveBaumann said:
Panajev2001a said:
If you really look at it, you will see that one shoe is not really fitting all workloads/activities, or trying to. Rather, we have several people who now happen to want to do a very similar activity, and they basically all need the same shoe model, just in different sizes, as that model in particular is the best for that activity; they do not care much for other activities, so that model will do fine enough.

When I said one size fits all, this is what I meant - I should have said "one size scales to fit all". You still have a tricky balance to meet the right processing requirements at the lowest level and not end up with redundancy in certain applications when you scale that up. I'm not saying you can't meet your requirements with success, but it seems inevitable you are going to end up with some redundancy somewhere in some applications you put it to, which goes back to the more focused units discussion earlier.

Sure you are going to end up with redundancy.
The cell approach was conceived in order to reduce redundancy vs continuing to evolve CPUs along their current trajectory. (The problem being that dedicating more and more control logic and functional units to extracting maximum instruction-level parallelism yields diminishing returns. Apart from system design costs, you end up with ever-growing parts of your CPU sitting idle, waiting. Intel's (and IBM's) hyperthreading is an example of how chip designers try to make more efficient use of these otherwise idle resources - again at a cost of increased control logic, and with extremely limited scaling possibilities.)

The cell approach wasn't originally conceived to replace dedicated ASICs. That is not to say that it cannot be applied at that level as well. The processor envisioned in the patent we all believe outlines the processor of the PS3 looks pretty damn targeted toward certain kinds of processing and will likely be pretty efficient at performing such tasks. It's likely to be very inefficient indeed at the kind of single-threaded clerical tasks that x86 is targeted at.

On the other hand, most of those tasks are efficiently handled by CPUs found in cell phones today, so maybe it's time for a change in focus.

I see that we are on the same page.

I do feel that most of those single-threaded "clerical" tasks current general-purpose CPUs are targeted at are, as far as most desktop PC users are concerned (not professionals who need ultra-fast logic-synthesis tools or other specialized software), not in need of the power and transistors current CPUs throw at them.

You take an ARM CPU, you clock it at 600 MHz and you could safely run your OS, Web Browser and Word Processor.

Even current CPUs can try to expand and meet the 3D gaming and general multimedia demand for performance, but for those CPUs it will be VERY costly, as you have to duplicate a much more massive set of logic: if each core had to be based on a Pentium 4 you would burn through your logic budget quite quickly, and if you share too many resources on the front-end you will be bound by fetching, decoding and issuing problems.

CELL and other similar architectures were targeted at those applications in the Consumer Electronics world that really need performance.

What you guys are mostly debating is how efficient the PS3 processor will be at tasks that correspond to the work of the CPU + part of the GPU in a PC (Xbox) system. That's a relevant question in a forum like this, but my interest in this processor is completely unrelated to gaming or graphics. It is grounded in a constant interest in tools for attacking certain computationally intensive problems, and in a general interest in the direction of computers in general. Massive parallelism hasn't been a viable alternative in consumer space so far. But transistor densities are moving in a direction where it will become so. The software cell concept scales well with the number of processing elements, while the current PC paradigm doesn't. If and how this will affect mainstream computing remains to be seen. But it's an interesting question, no?

That is a very interesting question :)

I understand your view, and I myself recognize that the software side of the whole CELL project is a HUGE task that we cannot overlook: CELL OS, CELL compilers, CELL programming APIs, tools, etc...

Since CELL is not restricted to gaming, they have to take those ideas into consideration: IBM at least is thinking about CELL outside CE devices; they are always interested in adding a potentially successful architecture under their belt, even for their Server and Workstation business, if that can help them make more money in services.

I think it is interesting that technology and certain applications (3D graphics and other multimedia and networking applications) are finally summoning the massively parallel approach in computing, as we have a chance to see it applied to other tasks computers have to perform: the fact that we have a software-side catalyst, as I keep saying :), will help give programmers a reason to try to solve older problems in a different way, and this in turn will help parallel architectures as they will have even more reasons to exist.
 
DaveBaumann said:
Note that I was using those two examples as ones that Panajev brings up. The point I'm making is that you need to strike a balance of abilities / performance for the lowest level of application you are going to set them to in relation to the scaled up versions.

What kind of abilities do you see in the APU (this is the building block; scaling is basically adding more APUs to the PE and then more PEs to the system) that would result in the problem you mention?

I do see where you are coming from though...

If you include a certain set of built-in abilities in the lowest-end version, scaling the power up by adding more of the basic logic constructs might leave you with far more of those built-in abilities than you actually need.

You are, in other words, worried that there might be certain things in each APU that scale too fast as you add more and more APUs: the demands 3D graphics places on us require so many APUs that those things end up far faster than needed, and therefore redundant.

I see the APUs including general capabilities that fit the tasks I mentioned, because those tasks call for lots of bandwidth plus highly parallel processing power, and the way APUs and PEs are set up they provide both: if you need better performance in one application you can add APUs to speed it up, provided that application scales well with the increasing number of APUs.

What about other applications which do not need the extra power? Well, they simply will not use the extra APUs.

I see it more as a resource management issue which can be solved than anything else, but I welcome your input on things I am sure I might be overlooking.

Ok, so let's think again about 3D Graphics and Networking.

Let's think about a PDA and a PlayStation 3 class console.

On the Networking side, if we just take a quick look at it, they might both be working on similar scenarios... moving packets at approximately the same rate.

On the 3D Graphics processing side, though, I expect that most users would not look for the same 3D processing capabilities as they would in PlayStation 3 (even though, looking at the PSP's specs, I see the gap between consoles and PDAs in terms of graphics shrinking).

This would mean that we could work with fewer APUs in the PDA and still produce a more than acceptable level of performance.

Now, on PlayStation 3 we would need higher performance, and we are lucky that 3D Graphics does not mind increased parallelism, hence the use of more APUs, scaling the system upward.

We just said that the level of performance was enough as far as Networking was concerned (although I would doubt that a PDA would function as your home LAN server, directing the orchestra of the house's CELL-enabled devices...).

So, are we left with tons of unused logic for Networking because we decided to scale our system up?

Not really...

We only added APUs where they were needed to bring 3D performance up: Networking performance did not really go up, as we might use all of the APUs we added in our rendering code, leaving the same amount of performance available to network processing.

APUs basically contain 4 FP units and 4 FX units, can do both SIMD and scalar work, and have a local memory (LS) and a register file... that is about it... the instruction set of each APU is very tight, and I do not see features that would scale badly in the way you seem to worry about.

Well, except for the focus on strong vector math (which utilizes the APUs better), a requirement we find in 3D graphics and image processing, both of which scale well.
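
Since the whole argument turns on "just add APUs", here is a rough sketch in C of the kind of vector-math workload that scales that way: the vertex array is simply cut into as many slices as there are units, and adding units only shrinks each slice. The apu_count parameter and the sequential dispatch are, of course, stand-ins for real hardware, not anything from the patent.

[code]
#include <stddef.h>

typedef struct { float x, y, z, w; } vec4;

/* Transform one contiguous slice of vertices by a column-major 4x4 matrix.
   This is the kind of kernel one unit would run out of its local store. */
static void transform_slice(const float m[16], const vec4 *in, vec4 *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        out[i].x = m[0]*in[i].x + m[4]*in[i].y + m[8]*in[i].z  + m[12]*in[i].w;
        out[i].y = m[1]*in[i].x + m[5]*in[i].y + m[9]*in[i].z  + m[13]*in[i].w;
        out[i].z = m[2]*in[i].x + m[6]*in[i].y + m[10]*in[i].z + m[14]*in[i].w;
        out[i].w = m[3]*in[i].x + m[7]*in[i].y + m[11]*in[i].z + m[15]*in[i].w;
    }
}

/* Split the vertex array into apu_count independent slices.  Adding units
   shrinks each slice; removing them grows it.  The networking code elsewhere
   in the system is untouched either way, which is the point being made above. */
static void transform_all(const float m[16], const vec4 *in, vec4 *out,
                          size_t total, unsigned apu_count)
{
    if (apu_count == 0)
        apu_count = 1;
    size_t per_unit = (total + apu_count - 1) / apu_count;

    for (unsigned u = 0; u < apu_count; u++) {
        size_t start = (size_t)u * per_unit;
        if (start >= total)
            break;
        size_t n = (start + per_unit <= total) ? per_unit : total - start;
        /* In a real system each slice would be dispatched to its own unit;
           here they just run sequentially to keep the sketch self-contained. */
        transform_slice(m, in + start, out + start, n);
    }
}
[/code]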
 
Geez Panajev, you've been working overtime here... ;)

ERP said:
There just isn't that much stuff in the average game outside of rendering that works on a lot of sequential, non-dependent data. Besides, pipelining is only a win when communication overheads don't kill you.

Faf, Archie, could you guys enlighten the rest of us? I know physics simulations can easily be done in parallel (I've done it before), and I know graphics can be; how about the other aspects of games?

ERP, what do you mean by "pipelining"? Pipelining usually refers to the multiple stages within a single processor.

Entropy said:
You are paying in size, noise, power consumption and money. That's a fairly large total price to pay, IMHO.

The computers typically used at work for non-computational tasks are roughly as powerful as PDAs. Apart from the drawbacks inherent in typical Wintel desktop systems, they do the job well. But they could be much smaller, completely quiet, much more energy efficient and generally fit better into the offices they populate. Look at the smallest VIA EPIAs, or PDAs for inspiration. Or the flat-panel iMac for that matter, where the entire computer doubles as the base of the ingenious panel support.

The industry already feels the effects of this in the migration towards mobile computers. Which is occurring despite the fact that most offices and institutions still insist on stationary systems, and despite the lower price/performance and typically higher absolute price of portables. The personal computing landscape is ripe for change, it is changing, but as yet it is changing while largely staying within the PC paradigm.

Bingo, check out this post I made, one of my first on this board. It pretty much says the same thing: modern CPUs are overkill for office tasks, using the x86 architecture for SIMD-friendly data like media and graphics is not really optimal, and a parallel architecture is the way to go. I think we're going to agree on a lot of things :D

---------------------------------------
Back to the problem of performance overkill:

Dave, I see your point, and I would agree with you if there were dedicated logic built into each APU. Let's say the APUs were Handheld Engines instead. You'd be absolutely right. Stack 16 of them together and you have 16 MPEG decoders, 16 sound chips... utterly useless and obviously redundant.

But APUs have none of that - they are general-purpose silicon that can perform many tasks. (With, as Panajev said, a heavy emphasis on vector operations)

Of course, I don't expect linear scaling with APUs, because communication overhead will eat into performance, but since the CELL architecture is designed to combat this, and because of its well-documented lineage with BlueGene (courtesy of DeadmeatGA), I think it will fare better than x86 chips that were never designed for parallel processing. And let's also not forget that the bulk of the processing will be graphics, which lends itself well to parallelism.
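
For a rough feel of how "sub-linear but still worthwhile" plays out, here is a tiny back-of-the-envelope sketch in C: plain Amdahl's law with a fixed per-unit communication penalty bolted on. The 90% parallel fraction and 0.5% overhead are invented numbers, not anything measured or claimed for CELL.

[code]
#include <stdio.h>

/* Estimated speedup on n units when a fraction p of the work parallelizes
   and each added unit costs a fixed communication overhead c (as a fraction
   of the serial runtime).  Plain Amdahl's law plus a linear penalty term. */
static double speedup(double p, int n, double c)
{
    return 1.0 / ((1.0 - p) + p / n + c * (n - 1));
}

int main(void)
{
    /* Illustrative only: 90% parallel work, 0.5% overhead per extra unit. */
    for (int n = 1; n <= 32; n *= 2)
        printf("%2d units -> %.2fx\n", n, speedup(0.90, n, 0.005));
    return 0;
}
[/code]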
 
Entropy said:
This is where you are wrong.
But you are conditioned by the industry to feel as you do.
You are paying in size, noise, power consumption and money. That's a fairly large total price to pay, IMHO.

Granted, but these are areas where we've had wide swing for a while--options, concessions, but not quite what I'd call a "price to pay." Micro-ATX computers and laptop replacements have become a lot more popular of late (though the first and IMHO coolest shot at them--the Mac G4 Cube--got hosed... go figure), but it's not like regular-sized computers are a "burden" by nature--and they're not going to stop people from buying what they want. Meanwhile in consoles they spread from GameCube to Xbox, which is a pretty large size difference, but is shrugged off by most markets. (And even in Japan I question the relevance people give it regarding the Xbox.) Noise tends to be "fine at any level" as long as it's not intrusive, and since consoles are pretty much only in operation when playing games and other "noisy" activities, they have some leeway. ^_^ (The only time noise might have played a large factor in graphics cards was with the GFFX 5800, where it was way overboard. For most other cards people want the SPEED first, and may appreciate quieter fans, but certainly don't want to make the trade-off for them to be completely silent--which they easily could be.) Power is even more transparent to the vast bulk of consumers, and unless you're heavily into intricate overclocking (which isn't done on consoles anyway) or grid computing, you're unlikely to care. ;) (The secondary effects will be the parts getting felt: Power --> Heat --> Necessary Cooling, which may make more noise.)

Money is, of course, the MOST noticeable factor, but it's also the one I addressed in my previous post. If it doesn't cost THEM more money (and it may not) it shouldn't cost US more money. If it brings about added advantage which they believe deserves a premium, then it's up to us to decide if it's worth the premium--which is pretty much how we function with every other purchasing decision we make.

I wouldn't phrase it as "conditioned by the industry" so much as "this is the NATURE of the industry as it always has been." Some devices trade off speed/power for those very things, but they've never played a notable part in "standard PC" or console purchases, and won't for most people unless something goes so far as to be very bothersome.
 
Nondescript said:
But APUs have none of that - they are general-purpose silicon that can perform many tasks. (With, as Panajev said, a heavy emphasis on vector operations)

Exactly, the lowest common denominator can also be scaled based on the clock (consequently power and thermal bounds) and still maintain continuity with the network. The combination of general computing constructs (FPU/FXU*4) and inclusion of the absolute timer allows for scaling very easily - this is a non-issue.
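
A rough sketch in C of what "scaling via the absolute timer" could look like in practice, assuming (hypothetically) a globally shared tick counter: each unit runs its task and then holds its result until the agreed deadline, so a faster-clocked part behaves identically to a slower one from the outside. The absolute_ticks()/wait_until() names are made up for the illustration, faked here with the host clock just so the sketch runs.

[code]
#include <stdint.h>
#include <time.h>

/* Hypothetical absolute timer shared by every unit in the system; here it is
   faked with the host's monotonic clock just to make the sketch runnable. */
static uint64_t absolute_ticks(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

static void wait_until(uint64_t deadline)
{
    while (absolute_ticks() < deadline)
        ;   /* busy-wait; real hardware would presumably sleep or gate */
}

/* Run one unit's share of the work, then hold until the agreed deadline.
   A faster-clocked unit simply finishes early and waits, so the timing seen
   by the rest of the system is the same however the part is scaled. */
static void run_budgeted(void (*task)(void), uint64_t budget_ticks)
{
    uint64_t deadline = absolute_ticks() + budget_ticks;
    task();                  /* the actual work; duration varies with clock */
    wait_until(deadline);    /* results only become visible at the deadline */
}
[/code]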

DaveBaumann said:
Again, why are you talking specifically about IBM?

Umm, have you not read a single thing I've written? How obvious is this? Seriously, it reduces to sheer logic:

Compare (A) to (B), (A) & (B) are composed of the following:

(A) - (a) + (b) + (c)
(B) - (d) + (b) + (e)

Hmm... Where do we start when we look for similarities which are intrinsic to a development cycle? It's a toughie alright.

Development of an entire system does not start and stop with a bloody CPU, Vince; there is much more work than just this to be done. We are also not necessarily talking about something that needs to be the scale of Cell, because they already have IP that is fundamentally dedicated to 3D graphics processing.

Ahh yes, two things. First, Cell looks to be fundamental to the PS3 - found in both the BE and VS ICs. Even if you can beat the VS with ATI's IC, you're still going to lose to the PS3 if you can't catch the BE. As I've been stating, unlike the XBox Next, which looks to be a PC-esque legacy device with segmented computational banks (CPU, GPU), the patent would seem to state that the PS3 is a single Cell platform, without such arbitrary bounds.

Second, we've already talked about this as well. The future is in Shading and this will bound future performance, not the ATI IP/patented constructs, which relate in large part to sampling, filtering and such. With Shading's prevalence, I think it's going to be funny when we see how ATI's "dedicated 3D graphics processing" stacks up against Cell's APUs.

You've never heard of a little business concept called "man hours"?

Ahh yes, since all designs can be reduced to sheerly concurrent work. Group one will start laying out the specs of IC (A) while group two will fix an error in a metal layer in the 3rd revision of IC (A) :rolleyes:

Hey Dave, what is the average development cycle of a contemporary CPU? I'm guessing you won't answer and will just talk about how there is also a "GPU" - but this isn't an answer, as STI's solution would appear to encompass the CPU and GPU, which would reflect more investment.

Vince, your statement was "STI's basically shown that leaks happen when you deal with IBM and a project of said scale" - this doesn't mean anything in relation to XBox because we don't know what levels of dealings MS have had with IBM yet, and they (at this point) may not be the primary technology partner. The point I'm making is that just because leaks have occurred with IBM doesn't necessarily mean that would be the case with all companies.

Exactly Dave, dev cycles aren't consistent across companies - which is why I'm using IBM (singularly) as a microcosm. Yet you seem to have had an Eastern Series performed on yourself, because you're not integrating these concepts - because you just asked me, "Why... IBM?"

And then I wonder why you keep arguing *shakes head*. As for IBM's involvement, it falls under IBM's Technology Group, which provides commonality with STI, as both utilize the Extreme Blue program - this is as close as you can come to a parallel.

Yes, known as in probable, as in SEC mandated, as in legal, as in temporally possible.

Again, where are the SEC declarations for the development funds MS have put into the companies already known to be working on XBox2? I've not seen any yet.

Call MS Investor Relations. Again, since you're not comprehending this - did they make a public announcement that an agreement has been reached? Why do you think they did this?

Internal Dev? So, Microsoft can do front-end MPU design now? I can draw out an architecture with a crayon too. Only it won't take me two years.

Aren't you doing some assuming here, Vince? We know they have the WebTV team in that unit and we don't know what they have been working on. I'm not suggesting that they have been doing this, but we just don't know what types of internal design they have done.

I also assume you think that development doesn't start and stop with the hardware design; there are also OSs, APIs and development SDKs to work on, as well as research into hardware requirements generation, etc.

Right, I'm assuming so much compared to you :rolleyes:

Let's see: I'm stating that we know how STI operated wrt IBM's involvement and disclosures. We also know that "According to Bernie Meyerson, IBM Fellow and chief technologist for IBM's Technology Group, the new Xbox technologies will be based on the latest in IBM's family of state-of-the-art processors." So, we can draw some parallels.

You, on the other hand, are stating that Microsoft has had a clandestine R&D program on the same scale as STI Cell, yet we've heard nothing about it, seen no disclosures at all - be these SEC mandated or those from within IBM as seen in STI.

So, you're assuming a chain of events that have NO proof other than the "You can't say it's impossible" defense. Classy.

Errr, Vince, I said it was known they went to one IHV in 2002 and that IHV wasn't even the one that got it - we can assume that they went to the others in the same time period, but that may not necessarily be the case.

Would that not be an unfair competitive practice, to approach other IHVs sooner? Can't the IHV you know of state that, if they had been informed sooner, they could have won the contract? Think, Dave, think.

Again, show me some proof! I want proof that shows a Cell-sized development cycle before the tenders went out in 2002.

We've already discussed several times the instance that highlights that work was underway by development partners prior to the contract announcements going out, which is what I'm talking about.

We've already discussed how this is irrelevant to the initial topic, namely:

How does anyone know the level of investment that MS themselves are putting in, and how much they are putting into the companies that are building various other elements of the XBox2?

And we know this because we haven't seen it, and what we have seen hasn't been comparable to STI across several common metrics.

PS. IBM also has concurrent 65nm R&D with AMD and other companies in addition to their STI effort and internal IBM work. Just for your information of course.

In other words, they are using the development of 65nm processing on other applications, not solely on their venture with ST. It's also possible that MS may get the benefit of that, is it not?

Dave, what part of "independent" are you not comprehending?

Up at IBM-SRDC, there are several concurrent R&D programs that are - get this - independently researching advanced lithography. I realize you don't know (otherwise you wouldn't have made this comment), but IBM is developing 65nm itself, with STI, with AMD, and with UMC/Infineon. As for STI's 65nm work, it was a technological transfer (circa 2001 and then 2002) which is held by OTSS (Sony and Toshiba). So, to answer your question regarding Microsoft, the answer is NO. But feel free to do your own research.
 