will PS3's GPU be more modern than PS2's GS for its time?

Vince said:
DeanoC said:
You just need lots and lots of contexts (and a seriously good memory system). You have to stop thinking like a CPU, where 8 contexts would be considered a lot.

Like what? 32 or 64 or ... ?

Depends on the memory latency you want to hide...

But they sound reasonable figures for a GPU.
 
MfA said:
I assume Cell's DMA engines will support scatter/gather. Do the patents back this up?
It seems so:
Method for asynchronous DMA command completion notification
[0019] With the flexibility of this approach, software can group DMA commands in order to manage them. For instance all commands for a particular "task" can be grouped into a single tag group. Alternatively, all DMA "get" commands can be placed in a group separate from an output group comprising all DMA "put" commands. In addition, hardware can provide additional command parallelism or ordering rules with respect to groups. The APU software can verify that a single group has completed, all groups have completed, or a specified set of groups have completed operations. In the current embodiment, tag group status is supplied by the APU reading a data channel, where each bit in the channel represents a tag group status. Bit 0 represents tag group 0 status, bit 1 tag group 1 status, and so on up to bit 31 for tag group 31. A 0 indicates the tag group is complete, a 1 in the corresponding position indicates the tag group has one or more outstanding commands not yet completed.

[0020] There are several variations on the above and a number of advantages associated with the different variations. In one embodiment, the DMA queue 135 can store up to 32 DMA commands. All DMA commands in the DMA queue 135 could have the same tag group number, they could all have different tag group numbers, or anything in between.

Auto prefetch is also supported:
DMA prefetch
The load access pattern generally contains information on the data being transferred. The load access pattern can be used to predict future data transfers and prefetch data to the SL1 cache before the MFC actually requests the data. When the MFC actually requests the data, the MFC does not have to go all the way back to the system memory 212 to retrieve the data. Instead, the MFC accesses the SL1 cache to retrieve the data and transfers the data to the local store.

So a CELL CPU should support scatter and gather DMA operations (with an automatic prefetch mechanism), with up to 32 operations in flight at once. DMA ops can be grouped and prioritized, and the SPU can check for completion of single commands or of whole groups.

ciao,
Marco
 
Panajev2001a said:
Jaws said:
Panajev2001a said:
....
With this said, I still say that IMHO the PlayStation 3 GPU is not CELL based, it does not have the SPUs/APUs.
....

It would seem to me a not bad idea to assign all the Vertex Shading work to the CELL based CPU:
....

To me the above two statements seem to contradict each other.. :?

Explain how.

Vertex Shading done by units like the SPUs/APUs or like the VUs in the EE can be done very effectively, while Pixel Shading might not (nVIDIA does not believe in using the same hardware for both and for now they might be right).

Splitting the VS and PS work between the CPU and the GPU is what some PS-heavy games on Xbox 2/Xenon will do: all the unified shading units would be dedicated to Pixel Shading work, and Vertex Shading would then be done on the enhanced VMX units of the Xbox 2/Xenon's CPU.

I guess you've read the rest of my post for me not to explain it again! ;)

On a side note, what you've posted above would've been a natural question to ask after reading 'point 3' of my explanation which would lead back to 'point 1' again and imply the GPU should be CELL based. :p

Panajev2001a said:
1. Basically, the way I see it is this...if you're gonna have vertex shading on the CELL CPU, then they're gonna be VS CELL threads, (aka software Cells), no?

If Hofstee was talking about two-way comms between CPU<=>GPU, then these VS CELL threads should run on the GPU also, no?


Ideally that is what they would want, which is why I talked about the direction for CELL 2.0 for example. IBM, with CELL, sees a point in pushing towards the same direction ATI is pushing with unified shading hardware.



Unified graphics hardware != CELL.

By CELL 2.0, I presume you mean what CELL 1.0 should be from the patents? I.e. Run everything, all threads, scalar and vector, be it graphics or non-graphics threads in a distributed, broadband environment under a unified ISA?

IMHO, if they can't achieve that with CELL 1.0, then the architecture is just a glorified PS2 EE that uses distributed processing. The main addition would be for the VU-equivalent S|APUs to work independently. Don't get me wrong, this is still not bad, it's just a natural progression of tech. But it's not the vision laid out in the patents.


This is not the way nVIDIA sees things, not for the short-to-medium term at least.


I disagree. I'm not sure what timescales you mean by short/medium term. But listening to that Huang interview/webcast with Morgan Stanley, he sees the graphics/media processor of the future as a programmable DSP. The CELL chip is basically a programmable DSP. The ATI R500 is basically a programmable DSP. I know on the VS/PS level ATI are looking at identical units but NV are differing, with those being specialised. But this should not stop VS threads running on PS units and PS threads running on VS units.

In fact I see the NV approach capitalising on CELL and complementing it, as mentioned in my other threads. If NV have two sets of shader units that are specialised and optimised for their respective VS and PS parts, then for future PCI-E PCs and PS3 ICs, we would see them employed as follows,

PC,

[CPU]<=>[VS<=>PS]

PS3,

[CELL]<=>[PS]


CELL Workstations perhaps,

[CELL<=>PS]

It will be interesting to see if they'll use Cg (very likely) and whether it will compile and run shaders on BOTH the CELL CPU and the NV GPU. It would make sense if they did. In my eyes, running Cg shaders would just become 'software rendering' executing on 'specialised' SIMD hardware. :)
 
Jaws said:
It will be interesting to see if they'll use Cg (very likely) and whether it will compile and run shaders on BOTH the CELL CPU and the NV GPU. It would make sense if they did. In my eyes, running Cg shaders would just become 'software rendering' executing on 'specialised' SIMD hardware. :)

SCEA was hiring engineers for new shader compiler development... where does it fit in?
 
one said:
Jaws said:
It will be interesting to see if they'll use Cg (very likely) and whether it will compile and run shaders on BOTH the CELL CPU and the NV GPU. It would make sense if they did. In my eyes, running Cg shaders would just become 'software rendering' executing on 'specialised' SIMD hardware. :)

SCEA was hiring engineers for new shader compiler development... where does it fit in?

it fits in as in 'they just hired some' ; )
 
IMHO, if they can't achieve that with CELL 1.0, then the architecture is just a glorified PS2 EE that uses distributed processing. The main addition would be for the VU-equivalent S|APUs to work independently. Don't get me wrong, this is still not bad, it's just a natural progression of tech. But it's not the vision laid out in the patents.

Yes, CELL 2.0 being the full vision of the patents and more of course (the road does not stop there).

I am not saying that CELL 2.0 is a new ISA, a totally different road.

Think CELL 1.0 as being the implementation of CELL as you will see in the WorkStation and in PlayStation 3 and CELL 2.0 to be a later implementation.

Like Treebeard would say, don't be so hasty master Hobbit ;).

The vision laid out in the patents was natural progression of existing technologies and ideas that have been theorized 10-15 or even 20 years ago in laboratory environments.

I disagree with the claim that Sony did not experiment with building an entire system out of CELL ICs and a few sections of custom logic/small dedicated ICs, but Sony/SCE is in the business of making PlayStation 3 and the CELL WorkStations the best possible products they can make, not proofs of concept.

On this I agree with V3: Sony/SCE does not make money proving that a concept like "use CELL for everything" works in a high-volume machine if that means monumental losses.

This does not mean the architecture is not good: wait to see CELL based products like hopefully some renderfarms.

Maybe some customer will ask IBM and Sony to deliver a new renderfarm and toolset chain, or maybe Sony ImageWorks itself will ask for a few million dollars' worth of a system, and with that kind of budget they might decide to push CELL far enough to allow for something like the full Broadband Engine and its Visualizer chip.

I believe we will see that once Sony/SCE starts PlayStation 4 R&D, just like we saw the GSCube-32 and GSCube-64 machines.


I know on the VS/PS level ATI are looking at identical units but NV are differing on those being specialised. But this should not stop VS threads running on PS units and PS threads running on VS units.

It does as it would be really inefficient: your example of PS threads running on VS units is exactly what nVIDIA says they do not want right now as the unit would not be used optimally at all.

PC,

[CPU]<=>[VS<=>PS]

PS3,

[CELL]<=>[PS]

In order to share data and work together, the CELL CPU and the nVIDIA GPU do not need to run the same code or the same Apulets: they just need to agree on what kind of interfaces they use to communicate between the two. More like a "you know that if I leave this quad-word here, it is a Vertex, because blah+blah is written at memory location blah-blah-blah, etc...".
 
Wouldn't Cell 1.0 be 90nm and Cell 2.0 on 65nm?

Some quotes in regards to Nvidia and Sony rumors from CNN in 2003...

"The reality is nVidia is not sitting in a vacuum," said Erach Desai, an analyst with American Technology Research. "They are in discussions with Sony for the PS3."

"They realize, I think, that they cannot do it all," said Desai. "I have checked with a couple of very seasoned executives ... and the strong impression is that there is interest from Sony and interest from nVidia."

"We've always said we'd be happy to be in any game console," nVidia spokesperson Carrie Cowan told me recently.

"I would probably characterize it as a less than 50 percent chance that they win PS3," said Michael McConnell of Pacific Crest Securities. "However, we have talked with Sony and their take on it is they're considering an external vendor as well as an internal solution. So you can't rule it out, but you definitely can't say it's a sure thing."
 
Mythos said:
Wouldn't Cell 1.0 be 90nm and Cell 2.0 on 65nm?

Pana has been pre-gaming for tomorrow all week, don't pay attention to his rants on v1.0 and v2.0. :p

But yes, there are 90nm Cell processors and there will be 65nm fabricated ones as well. No, they don't coincide with his comments.
 
Vince said:
Mythos said:
Wouldn't Cell 1.0 be 90nm and Cell 2.0 on 65nm?

Pana has been pre-gaming for tomorrow all week, don't pay attention to his rants on v1.0 and v2.0. :p

:lol

Vince, I did explain what I meant.

I do not think PlayStation 3 will fully go the way the first patent showed: namely a wholly CELL based system with a CELL based Visualizer.

One day we might see CELL based set-ups going the way ATI seems to be going: an extension of the Visualizer concept. A single IC doing both general purpose processing as well as graphics processing, with a sea of computational blocks exchanging data with a separate sea of dedicated silicon blocks (TMUs, ROPs, etc...).
 
Panajev, well, I hope that Sony fully funds smaller companies, like the one that makes the SaarCOR raytracing processors, for PlayStation 4. I'm hopeful since Sony was willing to go with Nvidia for the PS3 GPU, or at least a very, very significant portion of it.


PS2 CPU: Emotion Engine
PS3 CPU: Broadband Engine (?)

I hope PS4 CPU: RayTracing Engine 8)
 
Panajev2001a said:
I do not think PlayStation 3 will fully go the way the first patent showed: namely a wholly CELL based system with a CELL based Visualizer.

Are we only just coming to this realisation?
 
DaveBaumann said:
Panajev2001a said:
I do not think PlayStation 3 will fully go the way the first patent showed: namely a wholly CELL based system with a CELL based Visualizer.

Are we only just coming to this realisation?

No, I am repeating what I have been saying for quite a bit actually.

The GPU was not going to be CELL based like the Visualizer even when the other contractor had the contract and I do not expect this to be different with nVIDIA.
 
DaveBaumann said:
Panajev2001a said:
I do not think PlayStation 3 will fully go the way the first patent showed: namely a wholly CELL based system with a CELL based Visualizer.

Are we only just coming to this realisation?
:LOL:

That got me laughing. Some, who will remain nameless, were giddy as schoolboys in hopes of a revolutionary, CELL-based rasterizer. I don't think they wanted to give that up.
 
[image: kaigai02l.gif]


hmm?
 
Inane_Dork said:
DaveBaumann said:
Panajev2001a said:
I do not think PlayStation 3 will fully go the way the first patent showed: namely a wholly CELL based system with a CELL based Visualizer.

Are we only just coming to this realisation?
:LOL:

That got me laughing. Some, who will remain nameless, were giddy as schoolboys in hopes of a revolutionary, CELL-based rasterizer. I don't think they wanted to give that up.

Name me, I would have loved to see a REYES based set-up with a pure CELL based chipset :).

Would it have been cool? Yes.

Is it the best thing for the PlayStation 3, taking timing, cost, etc. into consideration? Probably not; hence the switch to the Toshiba solution and then to the nVIDIA solution.
 
Panajev2001a said:
...
Like Treebeard would say, don't be so hasty master Hobbit ;).

The vision laid out in the patents was natural progression of existing technologies and ideas that have been theorized 10-15 or even 20 years ago in laboratory environments.

It's not a matter of haste...they would've had 5yrs+ with access to immense resources! ;)

Anyway, like you say, it's been theorized for a while. IIRC, the TAOS operating system had these concepts of 'software Cells' for distributed graphics rendering and general parallel processing in a heterogeneous environment, using an object-oriented MPI programming model compiled to a VM processor, over TEN years ago... and it was a working model! :p

TAOS

They have now formed into 'TAO Group' who have, interestingly enough, a link with SONY. They have a Realtime OS and middleware platform for CE devices...and Sony are a partner who have invested millions into that group. Perhaps a link with PS3 :?:

http://tao-group.com/about/about.php

PS. Notice all the playstation game images on their site ! ;)

Panajev2001a said:
...
This does not mean the architecture is not good: wait to see CELL based products like hopefully some renderfarms.

I'd like to know how this NV-Sony collaboration will impact their DCC market with NV's Gelato and Sony's CELL workstations. NV sell Gelato as a distributed HW/SW rendering package optimised for their GPUs.

If there are gonna be NV GPUs in CELL workstations, then these GPUs must be able to provide similar distributed rendering alongside the CELL chips. Otherwise, if NV continue selling PC based Gelato workstations in the same market as Sony's CELL workstations, then they aren't leveraging their collective strengths in distributed processing/rendering: one just uses CPUs and the other GPUs, and NOT BOTH to their full potential. This is another reason to get CPU + GPU to work together on 'software Cells', and also for NV to license CELL processors, IMHO.

Panajev2001a said:
I know on the VS/PS level ATI are looking at identical units but NV are differing on those being specialised. But this should not stop VS threads running on PS units and PS threads running on VS units.

It does as it would be really inefficient: your example of PS threads running on VS units is exactly what nVIDIA says they do not want right now as the unit would not be used optimally at all.

Yes, I know it will be inefficient, but the point is to allow a degree of flexibility for VS- or PS-heavy graphics. There will be an optimum VS and PS load on the CPU+GPU chipsets, but a degree of freedom would be allowed either way. It will be interesting to know this inefficiency factor if NV do indeed have these optimised VS and PS units in their new architecture. :)
 
one said:
Jaws said:
It will be interesting to see if they'll use Cg (very likely) and whether it will compile and run shaders on BOTH the CELL CPU and the NV GPU. It would make sense if they did. In my eyes, running Cg shaders would just become 'software renderring' executing on 'specilaised' SIMD hardware. :)

SCEA was hiring engineers for new shader compiler development... where does it fit in?

This was the Job Pana posted,

The successful candidate will develop a state-of-the-art shading language compiler for an advanced fourth-generation graphics processing unit (GPU). With the assistance of other team members, the individual must be capable of designing and implementing the major components of the compiler backend.

To achieve this goal, the individual should have extensive recent experience with backend internal representations suitable for advanced code optimization. Detailed knowledge of modern code optimization techniques, register allocation, and code generation expertise is also required, as well as experience with programming language front ends, assemblers, linkers, and runtime libraries. Exposure to shading languages, such as nVidia CG, Microsoft HLSL, Brook, or StreamIt, and exposure to 3D graphics APIs, such as OpenGL and DirectX, is also desirable.

http://www.beyond3d.com/forum/viewtopic.php?p=423029#423029

I checked the link and the good news is that the vacancy has been filled!...pheeeww! :p

It mentions the major components of the compiler back-end being developed by this SCEA applicant. I would've thought NV would do most of that if they're supplying the GPU parts. :? ...still, they could be extending it to the CELL CPU also... and/or there are custom Sony GPU parts involved too. :?:
 