New Cell patents from IBM's Gschwind: the software side :).

Status
Not open for further replies.
:LOL: :LOL:


Anyway, I hope the next gen will be capable of stuff like that. It'll be interesting to find out what little lighting tricks and whatnot we'll see in the next 6 or so years. ;)
 
I hope I'm not ostracized on these boards for saying so, but Deadmeat's posts do serve a purpose at times. Almost everyone here is excited & venerating the CELL tech & architecture to a god-like status. (I agree, the tech patents are indeed very interesting & innovative.) But no one is seeing the potential problems of efficiently programming for it, or its possible innate complexity. Yes, most of his posts are motivated by a twisted Sony hatred, many times without any factual basis. But he illustrates some valid points many developers will soon have to grapple with.
 
Li Mu Bai, he has been banned for a reason and after repeated warnings. Yet, he keeps coming back. Last time crap like this started happening, the B3D mods shut the whole console forum down, and I'm thinking they will easily do it again - for good this time - if their rules are not respected.
 
marconelly! said:
Li Mu Bai, he has been banned for a reason and after repeated warnings. Yet, he keeps coming back. Last time crap like this started happening, the B3D mods shut the whole console forum down, and I'm thinking they will easily do it again - for good this time - if their rules are not respected.

I agree marco, he is definitely too incessant with his seemingly one-man holy crusade to prove CELL's failure from an architectural & programming standpoint. On this he needs to demonstrate some self-control & respect the forum rules. But criticisms of CELL seem to only be coming from him, & at times he is voicing valid programming concerns. Regardless, he has gone quite overboard on this subject.
 
Li Mu Bai said:
marconelly! said:
Li Mu Bai, he has been banned for a reason and after repeated warnings. Yet, he keeps coming back. Last time crap like this started happening, the B3D mods shut the whole console forum down, and I'm thinking they will easily do it again - for good this time - if their rules are not respected.

I agree marco, he is definitely too incessant with his seemingly one-man holy crusade to prove CELL's failure from an architectural & programming standpoint. On this he needs to demonstrate some self-control & respect the forum rules. But criticisms of CELL seem to only be coming from him, & at times he is voicing valid programming concerns. Regardless, he has gone quite overboard on this subject.

I think part of the problem is that he's so confrontational and vocal with his view, that a lot of people don't voice similar opinions for fear of association.

I know personally I wouldn't post anything I thought he could misrepresent in one of his overzealous articles.

FWIW I think he's disruptive to the forums and should stay banned.

Having said that if you banned everyone I thought that of, there'd be a lot less people posting in this forum. :)
 
ERP said:
Li Mu Bai said:
marconelly! said:
Li Mu Bai, he has been banned for a reason and after repeated warnings. Yet, he keeps coming back. Last time crap like this started happening, the B3D mods shut the whole console forum down, and I'm thinking they will easily do it again - for good this time - if their rules are not respected.

I agree marco, he is definitely too incessant with his seemingly one-man holy crusade to prove CELL's failure from an architectural & programming standpoint. On this he needs to demonstrate some self-control & respect the forum rules. But criticisms of CELL seem to only be coming from him, & at times he is voicing valid programming concerns. Regardless, he has gone quite overboard on this subject.

I think part of the problem is that he's so confrontational and vocal with his view, that a lot of people don't voice similar opinions for fear of association.

I know personally I wouldn't post anything I thought he could misrepresent in one of his overzealous articles.

FWIW I think he's disruptive to the forums and should stay banned.

Having said that if you banned everyone I thought that of, there'd be a lot less people posting in this forum. :)

But ERP, it's all in the wording. You don't have to be confrontational to be critical. I would much rather hear your concerns/critiques on possible programming-complexity issues than DM's any day. Of course I respect you, & DM is a borderline fanatic, but I am interested in hearing both sides of the proverbial coin, and not simply the accolades. If I begin a well-organized thread, will you contribute your tech expertise?
 
IST said:
Didn't Guden start that joke?

I don't think I did. I said something about him being an AI construct or such as a joke once, but I didn't imply him being some Skynet-like neural network spread throughout the web. ;)

Damn, that's a scary idea! There might even be a piece of Deadmeat right here in my own box! I better put a lock on the power tools cupboard in case they come jumping out at me wanting to kill me! :p
 
However, this is also a double-edged sword, since it also restricts the scalability of CELL applications to within the scope of the performance of the CPU they run on. In other words, CELL applications are not going to scale to the order of a million processors. But I accept that as a fair trade for a console processor.

The scalability issue you refer to would be related to the number of APUs vs. the number of PUs, or to put it another way, you would not be able to add too many APUs to a PE before having to seriously power up the PU that coordinates them.

That can be taken care of by enhancing the PU (clock speed, better caches, etc.), but can also be approached by limiting the number of APUs per PE and adding more PEs to the system.
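The trade-off above can be sketched with a toy model (my own illustration, not anything from the patents): if coordination cost on the PU grows with the number of APUs it schedules while the parallel work shrinks, there is a sweet spot for APUs-per-PE, and beyond it you are better off adding PEs. The `work` and `overhead` figures are arbitrary placeholders.

```python
# Toy APUs-per-PE model: one PU coordinates n APUs. Parallel work takes
# work/n, but coordination adds overhead*n of serial PU time, so
#   t(n) = work/n + overhead*n,
# which is minimized near n = sqrt(work/overhead). All numbers invented.
import math

def frame_time(work, overhead, n_apus):
    """Time for one PE to finish `work` units using n_apus APUs."""
    return work / n_apus + overhead * n_apus

def best_apu_count(work, overhead, max_apus=64):
    """APU count that minimizes frame_time over 1..max_apus."""
    return min(range(1, max_apus + 1),
               key=lambda n: frame_time(work, overhead, n))

if __name__ == "__main__":
    work, overhead = 1024.0, 1.0
    n = best_apu_count(work, overhead)
    print(n, frame_time(work, overhead, n))  # optimum near sqrt(1024) = 32
```

Past that optimum, a bigger PU (lower `overhead`) or more PEs is the only way to keep scaling, which matches the point made above.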
 
But criticisms of CELL seem to only be coming from him

Li, you're giving DM too much credit and not enough to others here who certainly *have* posted concerns and criticisms with regard to the Cell architecture. They just may not have voiced every concern as loudly and brashly as DM. DM acts like just about every fault he "finds" is a dealbreaker for the Cell project to ever achieve its goals, while others post concerns that fit the tenor of the conversation better. Let's not forget a sense of perspective here: these conversations about "CELL" are not based on officially published documentation representing final architecture specs and at least one example of a CELL chip in production. Everyone here is speculating based on a series of patents that logically seem linked to the project but with no confirmation of that fact, with some additional leads and hints pulled from interviews.

I'm not really sure what place *criticism* has in a speculative conversation anyway. You're talking about unfinished technologies and unless you're actually involved in the finalization of those technologies, what point is there to criticizing something that's incomplete, with only vague knowledge of how complete it is and what current state its in? I would distinguish between voicing concerns and voicing criticism here. The former belongs in a speculative conversation, I think, and there's been plenty from multiple parties, but the latter does nothing more than reveal an agenda.
 
Panajev said:
Can you say that the patent implied something that goes against software rasterization ?
Well they make a very brief mention of pixel pipelines. Given the vague nature of the entire patent I doubt it means anything though.
However, it seems pretty clear that whatever the rasterizing scheme may be - it is expected to use Z-buffer. For one, layering pretty much requires Z to work properly, and they also have a pretty detailed talk about hierarchical Z-buffer in the patent.
 
Fafalada said:
Panajev said:
Can you say that the patent implied something that goes against software rasterization ?
Well they make a very brief mention of pixel pipelines. Given the vague nature of the entire patent I doubt it means anything though.
However, it seems pretty clear that whatever the rasterizing scheme may be - it is expected to use Z-buffer. For one, layering pretty much requires Z to work properly, and they also have a pretty detailed talk about hierarchical Z-buffer in the patent.

Yes, they want Z-merge, but even though they focus on Z-buffering they do not exclude other approaches (at least for hidden surface elimination):

[0072] The parallel rendering engine 122 divides the entire storage region of the image memory 126 into multiple regions based on multi-path command from the main CPU 110. Using these multiple divided storage regions, it implements rendering of a three-dimensional object by subspace in parallel and generates divided images in the respective subspaces. Further, the parallel rendering engine 122 does Z-merge processing of each divided image rendered in parallel to consolidate image data in one of the divided regions. Thus, it generates one consolidated image. Z-buffer method, scan line method, ray tracing method, and such are used during rendering to remove hidden lines and hidden surfaces.
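The Z-merge step in [0072] is easy to picture in code. A minimal sketch (my own reading of the paragraph, with the "smaller z = nearer" convention assumed): each subspace render produces a (color, depth) pair per pixel, and consolidation keeps the nearer fragment at every pixel.

```python
# Sketch of the Z-merge consolidation from [0072]: two subspace renders,
# each a flat list of (color, z) pixels, are merged by keeping the nearer
# fragment at each pixel. Smaller z = closer is an assumption here.
def z_merge(img_a, img_b):
    """Merge two equal-length lists of (color, z) pixels into one image."""
    return [a if a[1] <= b[1] else b for a, b in zip(img_a, img_b)]

if __name__ == "__main__":
    front = [("red", 0.2), ("red", 0.9)]
    back = [("blue", 0.5), ("blue", 0.4)]
    print(z_merge(front, back))  # [('red', 0.2), ('blue', 0.4)]
```

Because the merge only needs the per-pixel depths, each divided region can be rendered completely independently first, which is exactly what makes the parallel scheme in the paragraph work.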
 
I know, I've read the patent.
Many of those don't fit all that well with the concept of pixel pipelines though, which is also mentioned, with a picture even.
 
Fafalada said:
I know, I've read the patent.

I know, I was not trying to school ya Fafalada ;).

I do not mean to sound arrogant :(.

Many of those don't fit all that well with the concept of pixel pipelines though, which is also mentioned, with a picture even.

Why don't they fit all that well? (I am sorry for this basic question, but with the idea I have of what they mean by pixel pipelines [read ahead ;)], it does not seem impossible.)

What if the idea they give in the patent of Pixel Pipelines, as far as that patent goes, is something like the # of APUs (depending on the configuration) + the Pixel Engine (which would be fast, but quite streamlined)?

Visualizer's APUs might run Shaders and take care of the Geometry set-up process if needed.

Maybe I am not looking at the right picture and focusing on an implementation instead of the concept you are presenting.

What do you think is going on ?
 
Panajev said:
What if the idea they give in the patent of Pixel Pipelines, as far as that patent goes, is something like the # of APUs (depending on the configuration) + the Pixel Engine (which would be fast, but quite streamlined)?
Visualizer's APUs might run Shaders and take care of the Geometry set-up process if needed.
So what you are proposing is some kind of "generalized" pipeline where there is no fixed primitive setup, and the only things that would really stay in fixed hw are texture fetches & filtering and maybe pixel tests & writes?
That way you could write your own rasterization and no longer be limited to just triangle primitives.

The question you should ask is whether it's really worth it. In this scenario you end up spending massive amounts of general-purpose FPU power just to put things on screen, instead of using it to make the picture look better.
If the hardware is so powerful that primitive setup and rasterization only take a tiny % of the total general computing resources, then sure, it would be great, but I don't really buy that just yet.
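A back-of-envelope check of that concern, with every figure an invented assumption (triangle count, per-setup and per-pixel FLOP costs, and the rumored 1 TFLOP budget), shows how to estimate what fraction of the machine pure software setup and interpolation would eat:

```python
# Rough cost model for software primitive setup + per-pixel interpolation.
# ALL numbers are placeholder assumptions, not measurements: 200 FLOPs per
# triangle setup, 30 FLOPs per shaded pixel, and a 1 TFLOP total budget.
def raster_flops_per_sec(tris, pixels, fps, setup_flops=200, pixel_flops=30):
    """FLOPs/sec consumed by setup + interpolation for the given scene size."""
    return (tris * setup_flops + pixels * pixel_flops) * fps

if __name__ == "__main__":
    budget = 1e12  # the oft-quoted 1 TFLOP figure
    used = raster_flops_per_sec(tris=2_000_000, pixels=50_000_000, fps=60)
    print(f"{used / budget:.1%} of budget")
```

With these made-up numbers the result lands around a tenth of the budget, which illustrates ERP's point: whether that is "a tiny %" or a painful tax depends entirely on how the real costs and budget shake out.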
 
simple question:

if a developer wanted to do this...

could the APUs and whatnot in Visualizer do all geometry & lighting / T&L / Vertex Shading ?
(basically the polygon processing that Real3D in Model 3 arcade did, that Flipper in Gamecube does, that NV2A in Xbox does, that ELAN in NAOMI 2 does, and that the GS in PS2 does *not* do)

therefore relieving the Broadband Engine to focus on just non-graphical duties.

obviously, without the Broadband Engine doing geometry & lighting computations, we'd have lower-geometry games, because of course the BE will have a higher flop rating than the Visualizer for several reasons, the # of APUs for one. I just wanted to know if the Visualizer could be considered a full graphics processor that could do polygon and lighting processing on its own, if needed.

I think the answer to my question is 'yes' but I'm interested in seeing what responses I get. :eek:
 
If Visualizer will have APUs, then the answer is of course yes.

But anything else is just an assumption on our part right now - both whether the Visualizer will actually be Cell-based, as well as what kind of FP rating it will actually have relative to the BE. Heck, one of the software patents suggests the name isn't even final either :p
 
If both CPU and GPU in PS3 are cell-based, it's very likely the CPU will have a more powerful (programmable) flops rating. Assuming there aren't any dedicated pixel shader-like computing elements in the GPU, one would think the chip relies on its APUs to supply that. Therefore, it might be more realistic to think T&L will be handled by the CPU instead.

After all, if complex surfaces and such are used, it has to generate reams of geometry anyway; add to that deformable models, physics calculations like wind, etc... The CPU will likely have touched nearly every vertex anyway, so why not T&L them as well? :) If the CPU is anywhere near the 1 Tflop figure in theoretical performance, it will have oodles of power to spare anyway.
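The "CPU already touches every vertex, so T&L it too" idea boils down to something like the following toy pass (purely illustrative: a 3x3 transform and a single directional Lambert term, nothing resembling real Cell code):

```python
# Toy CPU T&L pass: transform each vertex by a matrix and light it with
# one directional light (clamped dot product). Shapes and math are the
# bare minimum for illustration, not a real pipeline.
def transform(m, v):
    """Apply a 3x3 row-major matrix to a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def lambert(normal, light_dir):
    """Clamped dot product: simple diffuse lighting term."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def t_and_l(verts, normals, m, light_dir):
    """Return (transformed vertex, intensity) pairs, as a CPU pass might."""
    return [(transform(m, v), lambert(n, light_dir))
            for v, n in zip(verts, normals)]
```

Since physics and surface tessellation would already stream every vertex through the CPU, folding a pass like this into the same loop costs little extra bandwidth, which is the argument being made above.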
 