CONFIRMED: PS3 to use "Nvidia-based Graphics processor"

Jaws said:
V3 said:
This is just an analogy below,

CPU= 32 S|APUs (vertex shading)
GPU= 32 S|APUs (pixel shading)

CPU<=>GPU--->output

Replace 32 with whatever number you think, but there's a 1:1 mapping. And call the S|APUs whatever, depending on whether they're VS or PS units. And both CPU and GPU will be classed as CELL based, as they should be able to execute Apulets (Software Cells). That's what I'm leaning towards...

APUs for fragment shading are overkill; a fragment shading unit is cheap to implement in terms of silicon real estate and could be useful for post-processing. You don't want to waste APUs on that.

Maybe what you want is to unify the rendering pipeline and shade everything in micropolygon style, instead of treating vertices and pixels separately.

Not like ATI's solution for the next Xbox, where unification is just for load balancing and the rendering pipeline is still similar to the current one.

I said it was an analogy! ;)

These PS S|APUs are specialized for the GPU, not like your VS S|APUs on the CPU. What are the differences? Well, this is what I think nVidia is doing! ;)

They could even be Salc/Salps and would not even have to be 4-way SIMD units, or they could be other pixel-engine-type units from nVidia or Sony.

And I agree with your micro-polygon shading style, which is why I think this is likely. :D

http://www.beyond3d.com/forum/viewtopic.php?t=18849

EDIT: I'll coin these VS|APUs and PS|APUs for the vertex and pixel units, so:

CPU= 32 VS|APUs
GPU= 32 PS|APUs

CPU<=>GPU--->output

Both CPU and GPU can work on 32-bit data and exchange it both ways.
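
To make the analogy concrete, here's a minimal C++ sketch of that 1:1 pairing; every name, number and type in it is hypothetical, an illustration of the shape of the pipeline rather than anything from Sony or nVidia:

Code:
// Hypothetical sketch of the 1:1 VS|APU -> PS|APU analogy above.
// None of these names or numbers come from Sony/nVidia material.
#include <array>
#include <cstdio>

constexpr int kUnits = 32; // "replace 32 with whatever number you think"

struct Vertex { float pos[4]; };
struct Fragment { float color[4]; };

// A VS|APU on the CPU side produces transformed vertices...
Vertex vertexShade(int unit) {
    Vertex v{};
    v.pos[0] = static_cast<float>(unit); // stand-in for real transform work
    return v;
}

// ...and its paired PS|APU on the GPU side consumes them (the 1:1 mapping).
Fragment pixelShade(const Vertex& v) {
    Fragment f{};
    f.color[0] = v.pos[0] * 0.5f; // stand-in for real shading work
    return f;
}

int main() {
    std::array<Fragment, kUnits> frame{}; // CPU <=> GPU ---> output
    for (int u = 0; u < kUnits; ++u)
        frame[u] = pixelShade(vertexShade(u));
    std::printf("shaded %d fragments\n", kUnits);
}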
 
What do you think about the 18-month comment? Do you think perhaps that's just how much time they had been working on their next-generation GPU anyway, and they can use it as a convenient "been working on this with Sony for 18 months" excuse?

Partially, yes, that probably is how long their own architecture has been underway, but I'd also say that lines of communication have been open for as long, if not longer; I'm sure that NVIDIA would have been very keen to have their hand in there even if it wasn't at a hardware level. The idea of being frozen out of all the next-generation consoles would not be a good one, as this is where a significant quantity of development, some of which will end up on the PC, will be done.

I'm just curious why you seem so convinced they missed their performance target. Just from the general information available, that seems like one possible explanation for going with nVidia, but certainly not the only one.

It may not necessarily be solely their performance target, but the targets for what they could achieve with their preferred solution in terms of general graphics capabilities - NVIDIA may have made a very convincing argument that those targets might not be achievable via that route and that going NVIDIA's route would be far safer (re: the post about development trends towards fragment shading).

(BTW - something that may have helped NVIDIA's relationship with Sony, and given them a decent line in, is that they hired one of Sony Computer Entertainment's Dev Rel managers a few years back. When Chris Donnelly left as NVIDIA's head of Developer Relations (only to crop up at MS later on), NVIDIA backfilled that position with the guy from Sony.)
 
Jaws said:
[quoted in full above]

Jaws... in the Visualizer those APUs were supposed to be APUs... that's it... they were supposed to be able to share workload dynamically with the CPU: that was the major point in making the GPU CELL based, besides re-using technology they had been working on already.

If the GPU was to be CELL based (maybe Sony's own solution, what Dave talks about as choice 1), then the APUs in the GPU at best would have been "enhanced" for graphics processing, but fully compatible with the APU ISA the CPU's APUs were using.

Using a non-CELL co-processor with a CELL chip is not a new idea... Toshiba already announced the intention of doing the same with CELL and their own MeP architecture.

The other solution, the one racing against nVIDIA's, was meant to carry only the pixel shading load, with all the vertex shading done on the CPU; it is not a big stretch to see how it could not dynamically share work with the CELL based CPU, as it was not itself CELL based (call it CELL-G, or CELL for graphics, if you think that this PS-only GPU was CELL based).

The Visualizer was supposed to be CELL based, but I think that the three-way race (completely internal Visualizer vs "partner" GPU vs nVIDIA GPU) turned into a two-way race ("partner" vs nVIDIA) not too long ago, and now a winner has finally been chosen after a bit of back and forth.

It is not the end of the world for the GPU not to be CELL based: it can still be a scalable and modular design (re-use of R&D efforts across several markets is not something that interests only the CELL joint-venture partners).
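
As an aside, the dynamic workload sharing that made the Visualizer attractive is easy to sketch: if CPU and GPU execution units share one ISA, either side can drain the same queue of work when idle. A minimal C++ illustration, with every name made up for the purpose (nothing here comes from the patent or from Sony):

Code:
// Sketch of dynamic CPU<->GPU workload sharing via a common queue.
// Hypothetical: assumes both sides can execute the same code ("apulets").
#include <functional>
#include <mutex>
#include <queue>

using Apulet = std::function<void()>; // a self-contained unit of work

class SharedQueue {
    std::queue<Apulet> q_;
    std::mutex m_;
public:
    void push(Apulet a) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(a));
    }
    // Any APU, whether it sits on the CPU or the GPU, calls this when idle.
    bool tryPop(Apulet& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
};

int main() {
    SharedQueue q;
    q.push([] { /* e.g. skin a batch of vertices */ });
    Apulet work;
    while (q.tryPop(work)) work(); // an idle unit drains whatever is pending
}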
 
Could be that the deal was more about software than hardware.

Getting NV's know-how on writing a proper driver API, and getting their shader compiler tools, could help boost development on next-gen hardware, especially when faced with MS's XNA.

Cheers
Gubbi
 
Gubbi said:
Could be that the deal was more about software than hardware. [...]

They specified they are working on hardware based on their next-gen GPU.
And if that wasn't enough, why would Sony employ someone else to get to know their own product and write libraries for them? NVIDIA would have to spend a lot of time trying to figure out the ins and outs of the Cell Architecture (should the GPU be totally Cell based), and then more time to write libraries to take full advantage of it.
Sounds a bit much. And I'm not sure why Nvidia would do that without any kind of input on how the hardware should work...
 
Gubbi said:
Could be that the deal was more about software than hardware. [...]

Gubbi, nVidia stated that PlayStation 3's GPU will be a custom version of their next-generation PC GPU.
 
Panajev2001a said:
[quoted in full above]

I'm not taking the patent word for word here. Personally I'm not bothered whether it's CELL based or not. But just because they have nVidia doesn't mean that their goal of a CELL based GPU has changed. ;)

The Visualizer, as the patent shows, has APUs and Pixel Engines. All I'm saying is that those units have been changed to just PS|APUs, which are basically pixel engines, and this is what nVidia is designing.

Whether the GPU counts as CELL based or not, I'm basing that on whether it can run 'software cells'. I'll refrain from using Apulets because they imply only bog-standard 'APUs' from the original patent can run them. Now how are they gonna do that?... Dunno, that's what nVidia are there for...

And

VS|APUs <=> PS|APUs---> output

is based on Peter Hofstee's GameSpot CELL video.

I think we're agreeing but disagreeing on that video.

Now, whether these two types of execution units can run 'software cells' or not is the question that determines if they meet their goal of a fully CELL based system, not whether it's the Visualizer diagram from the patents that gets implemented. :)
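
To pin down what "can run software cells" would even mean as a test, here is a rough C++ sketch of a cell as a routed package of code plus data, and of an execution-unit interface that either a VS|APU or a PS|APU could implement; the field names are my guesses for illustration, not the patent's:

Code:
// Rough sketch of a 'software cell': a routing header plus program and
// data, so any compatible unit can execute it. Field names are guesses.
#include <cstdint>
#include <cstdio>
#include <vector>

struct SoftwareCell {
    std::uint64_t sourceId;            // where the cell came from
    std::uint64_t destinationId;       // which unit should run it
    std::vector<std::uint8_t> program; // code in the shared APU ISA
    std::vector<std::uint8_t> data;    // operands the program works on
};

// The test in the post above, stated as an interface: a unit is "CELL
// based" in this sense if it can accept and execute such a cell.
class ExecutionUnit {
public:
    virtual ~ExecutionUnit() = default;
    virtual void execute(const SoftwareCell& cell) = 0;
};

struct LoggingUnit : ExecutionUnit {
    void execute(const SoftwareCell& cell) override {
        std::printf("running %zu bytes of code on %zu bytes of data\n",
                    cell.program.size(), cell.data.size());
    }
};

int main() {
    LoggingUnit u;
    u.execute({1, 2, {0x00}, {0x2A}}); // placeholder cell
}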
 
I thought raytracing was meant not only to solve problems with shadows but to solve problems with reflection and refraction as well. :?
 
Xenus said:
I thought raytracing was meant not only to solve problems with shadows but to solve problems with reflection and refraction as well. :?

That would be correct. Very expensive performance-wise, though. It's not that it solves problems with reflections and refractions, it's that it does them properly.
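
A toy C++ skeleton of why that is: in a recursive tracer, reflection and refraction fall out of the same mechanism, since each hit just spawns secondary rays. The scene and the refraction math are stubbed out here; this shows the shape of the algorithm, not a renderer:

Code:
// Recursive ray tracing skeleton: shadows, reflection and refraction all
// come from tracing more rays. Scene intersection and Snell's law are
// stubbed out; only the recursion structure is the point.
#include <optional>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ray { Vec3 origin, dir; };
struct Hit { Vec3 point, normal; float reflectivity, transparency; };

Vec3 add(Vec3 a, Vec3 b, float w) { return {a.x + w * b.x, a.y + w * b.y, a.z + w * b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 reflect(Vec3 d, Vec3 n) {            // mirror d about the normal n
    float k = 2.0f * dot(d, n);
    return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}

// Stubs: a real tracer intersects geometry and bends rays with Snell's law.
std::optional<Hit> intersectScene(const Ray&) { return std::nullopt; }
Vec3 refract(Vec3 d, Vec3) { return d; }
Vec3 shadeLocal(const Hit&) { return {1, 1, 1}; }

Vec3 trace(const Ray& ray, int depth) {
    if (depth == 0) return {};
    auto hit = intersectScene(ray);
    if (!hit) return {};                   // missed everything: background
    Vec3 c = shadeLocal(*hit);             // direct lighting (and shadow rays)
    if (hit->reflectivity > 0)             // reflection: spawn a bounced ray
        c = add(c, trace({hit->point, reflect(ray.dir, hit->normal)}, depth - 1),
                hit->reflectivity);
    if (hit->transparency > 0)             // refraction: spawn a bent ray
        c = add(c, trace({hit->point, refract(ray.dir, hit->normal)}, depth - 1),
                hit->transparency);
    return c;
}

int main() {
    (void)trace({{0, 0, 0}, {0, 0, 1}}, 4); // 4 bounces max
}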
 
Now, assuming that nVIDIA came in early 2003 or somewhere around then, you can check the filing dates of graphics-related patents from Sony/SCEI.
 
one said:
Now, assuming that nVIDIA came in early 2003 or somewhere around then, you can check the filing dates of graphics-related patents from Sony/SCEI.
If you check nvidia patents you can find interesting things ;)
 
nAo said:
one said:
Now, assuming that nVIDIA came in early 2003 or somewhere around then, you can check the filing dates of graphics-related patents from Sony/SCEI.
If you check nvidia patents you can find interesting things ;)

Heh yeah, u don't see hoovers with video cards attached to them very often... :devilish: j/k

(notice how nAo woke up just by mentioning patents... I'm telling u, he's jacked in, he can see the patterns in the Matrix)
 
london-boy said:
(notice how nAo woke up just by mentioning patents... I'm telling u, he's jacked in, he can see the patterns in the Matrix)

Then he must have been in my house yesterday, as there was a serious hiccup in the Matrix there... my PowerBook grew a dead pixel, my digital TV receiver stopped working, and so did a light bulb, all within about four hours.

Fredi
 
McFly said:
london-boy said:
(notice how nAo woke up just by mentioning patents... I'm telling u, he's jacked in, he can see the patterns in the Matrix)

Then he must have been in my house yesterday, as there was a serious hiccup in the Matrix there... my PowerBook grew a dead pixel, my digital TV receiver stopped working, and so did a light bulb, all within about four hours.

Fredi

Must have been the Illuminati.
 
Jaws: Apulet = Software cell.

The Visualizer was supposed to use APUs (the same APUs as the ones in the CPU, the Broadband Engine).

The 3D graphics acceleration for texture sampling, AA, etc... was to be provided by the "Pixel Engines" part of the Visualizer.

The internal solution, I agree, was quite surely based on the Visualizer model, but it was not the solution chosen by SCE's management even before nVIDIA won the contract (the chirping of the birds indicated that the preferred solution was coming from this "partner", and I doubt it was CELL based).

Whether the GPU counts as CELL based or not, I'm basing that on whether it can run 'software cells'.

Which would possibly imply dynamic partitioning of the workload between the CPU and the GPU of the system.

Hofstee's presentation mentioned the present, the relatively short-term future (PlayStation 3 and the CELL WorkStation, basically), and the longer-term future.

He mentioned moving away from texture maps completely, which might seem a bit radical if you read it as a whole new approach to graphics... REYES/PRMan still use textures all over the place (over 30% of rendering time is spent in I/O, and a lot of that time you are fetching and sampling textures).

What he said could be seen in a more realistic light, though: he mentioned the old-old way, "just take a simple model and paste a texture on it to fake detail", and when he says that he wants to move away from that he is not saying that normal maps, displacement maps, "regular" texture maps, etc... are not going to be used any longer (as I said, you do not drop them even in REYES-based off-line rendering systems).

He meant it more in the following way: CELL will allow us to increase pure geometric detail even more and to make realistic/physics-based use of it for animation, lighting, etc...

Doing things the hard way, subdividing a purely flat surface down to tons of micro-polygons per pixel when a couple of hundred triangles, a texture map and per-pixel lighting can achieve the SAME effect (in some cases it will still happen), is not a sign of genius... it is a sign that you could have saved processing time by taking the short-cut on that occasion, and used that time to subdivide other surfaces better, add more lights or enhance the scene some other way if possible. If you can afford it, dice everything down to micro-polygons, but I am not sure we are at the point in real-time graphics where no short-cut is worth pursuing at all (off-line CG has the advantage in overall frame time... a flexible number of hours versus a fixed number of milliseconds).
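
A back-of-the-envelope version of that trade-off, with purely illustrative numbers (a flat wall covering a quarter of a 1280x720 frame, diced to roughly four micropolygons per covered pixel, versus one texture-fetch-plus-lighting evaluation per covered pixel):

Code:
// Illustrative shading-cost arithmetic for a flat wall; the ratio, not
// the absolute numbers, is the point.
#include <cstdio>

int main() {
    const long screenPixels = 1280L * 720L;       // 921600
    const long wallPixels = screenPixels / 4;     // wall covers 1/4 of frame

    // REYES-style: dice until micropolygons are ~1/2 pixel on a side,
    // i.e. about 4 micropolygons (and shading runs) per covered pixel.
    const long micropolyShades = wallPixels * 4;  // 921600

    // Short-cut: a couple hundred triangles, one texture fetch and one
    // per-pixel lighting evaluation per covered pixel.
    const long shortcutShades = wallPixels;       // 230400

    std::printf("micropolygon shades: %ld\n", micropolyShades);
    std::printf("texture+lighting shades: %ld\n", shortcutShades);
}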

I think that, overall, SCE's management has chosen what they believed to be the best solution for their PlayStation 3 strategy, which still remains, as it was for PSOne, as it has been for PlayStation 2, and as it seems to be for PSP... a very aggressive yet well-planned-out strategy.
 
Panajev2001a said:
He mentioned moving away from texture maps completely, which might seem a bit radical if you read it as a whole new approach to graphics... REYES/PRMan still use textures all over the place (over 30% of rendering time is spent in I/O, and a lot of that time you are fetching and sampling textures).

Maybe he was talking about moving away from regular textures to completely procedural textures? Would that make sense in this context?

Fredi
 
McFly said:
Panajev2001a said:
He mentioned moving away from texture maps completely, which might seem a bit radical if you read it as a whole new approach to graphics... REYES/PRMan still use textures all over the place (over 30% of rendering time is spent in I/O, and a lot of that time you are fetching and sampling textures).

Maybe he was talking about moving away from regular textures to completely procedural textures? Would that make sense in this context?

No, IMHO, as those are not a full answer to the texture issue: we cannot do it all procedurally.
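
For illustration, this is all a fully procedural texture is: colour computed from (u,v) at sample time, with no stored image at all. It works for checkers, noise or marble, but not for, say, a painted sign, which is exactly why it cannot replace sampled texture maps wholesale. A minimal C++ sketch (names are mine):

Code:
// A fully procedural checkerboard: no texture memory, just math on (u,v).
#include <cmath>
#include <cstdio>

struct Rgb { float r, g, b; };

Rgb checker(float u, float v, float scale) {
    int cell = static_cast<int>(std::floor(u * scale)) +
               static_cast<int>(std::floor(v * scale));
    float c = (cell % 2 == 0) ? 1.0f : 0.1f; // alternate bright/dark cells
    return {c, c, c};
}

int main() {
    Rgb t = checker(0.3f, 0.7f, 8.0f); // sample the "texture" at one point
    std::printf("%.1f %.1f %.1f\n", t.r, t.g, t.b);
}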
 
Maybe a better one than the NV47, which has 24 pixel pipelines. Dual GPU, 32-bit pipelines, or maybe they have a dual core... let's get excited.
 
Pana,

I think when Hofstee said this in the same GameSpot article, it said it all, coming from an understated IBM guy:

"We've created something that is very flexible," he said. "Having a more generic architecture will allow people to do new things."

:devilish:
 