PS3 GPU Analysis

Sorry, but I'm confused about this thread's facts, rumours, and speculation being blurred. The only facts we have are the joint Sony-nVidia press release last December, and maybe any direct quotes from Huang.

So, to help things along, how about agreeing on certain fundamental points, on a point-by-point basis, as STRONG/WEAK rumours/speculation, and once we agree on all the STRONG speculation, seeing what can be inferred from it?

I'll start with these points, as Strong/Weak rumours:

1. Aggregate two-way CELL<=>GPU bus bandwidth, ~77 GB/s, divided into 12 x 8-bit lanes. It's asymmetric, with a 7:5 ratio (see the quick arithmetic check after this post). STRONG.

2. The GPU will have eDRAM. STRONG.

3. Vertex work predominantly done on the CELL/SPEs. STRONG.

4. Deferred rendering. WEAK.

5. Total XDR RAM, 256MB. STRONG.

6. One CELL/8 SPEs will be used as CPU. STRONG.

Please add more key points, and once everyone's in agreement, let's infer what we can from all the STRONG points? :)
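
For point 1, here's a quick back-of-the-envelope check. The per-lane rate is my own assumption (byte-wide FlexIO lanes at an effective 6.4 GHz), but it lines up with the ~77 GB/s aggregate and the 7:5 split:

# Rough check of the rumoured CELL<=>GPU bus figures (Python).
# Assumption: 12 byte-wide lanes, each good for 6.4 GB/s (6.4 GHz effective, 1 byte/transfer).
lanes_out, lanes_in = 7, 5            # asymmetric 7:5 split
gb_per_lane = 6.4                     # assumed per-lane bandwidth in GB/s

bw_out = lanes_out * gb_per_lane      # CELL -> GPU
bw_in  = lanes_in  * gb_per_lane      # GPU -> CELL
print(round(bw_out, 1), round(bw_in, 1), round(bw_out + bw_in, 1))  # 44.8 32.0 76.8 (~77 GB/s)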
 
Jaws said:
Sorry, but I'm confused about this thread's facts, rumours, and speculation being blurred. The only facts we have are the joint Sony-nVidia press release last December, and maybe any direct quotes from Huang.

So, to help things along, how about agreeing on certain fundamental points, on a point-by-point basis, as STRONG/WEAK rumours/speculation, and once we agree on all the STRONG speculation, seeing what can be inferred from it?

I'll start with these points, as Strong/Weak rumours:

1. Aggregate two-way CELL<=>GPU bus bandwidth, ~77 GB/s, divided into 12 x 8-bit lanes. It's asymmetric, with a 7:5 ratio. STRONG.

2. The GPU will have eDRAM. STRONG.

3. Vertex work predominantly done on the CELL/SPEs. STRONG.

4. Deferred rendering. WEAK.

5. Total XDR RAM, 256MB. STRONG.

6. One CELL/8 SPEs will be used as CPU. STRONG.

Please add more key points, and once everyone's in agreement, let's infer what we can from all the STRONG points? :)


The PS3 GPU should be capable of SM 4.0. If Nvidia had SM 3.0+ hardware in early-to-mid 2004, by late 2005 I would hope SM 4.0 capability is there in the PS3 GPU.
 
nAo said:
Jaws said:
2. The GPU will have eDRAM. STRONG.
Strong? :? Is this insider info or just speculation?

Well, I said STRONG because the PS2's GS had it, the GC's Flipper had it, Xenon is rumoured to have it... and the recent poll I did agreed with it. But this is the whole point of the discussion: agreeing on these points before we can continue. So you think it's WEAK?
 
nAo said:
Jaws said:
So you think it's WEAK?
Yeah... Nvidia had too little time to add eDRAM to their design.
Even if I would like to be wrong on this..

I suspect something along the lines of NV's TurboCache tech being used instead? What about all the Sony-Toshiba investment in eDRAM tech over the years?

Well, can we infer other things from this key issue? i.e. I'm under the impression (correct me if wrong) that in order to emulate the PS2 successfully, you will need more than 48 GB/s of bandwidth somewhere in the system, because of the GS's 48 GB/s. If it's not the actual EE+GS IC, then this bandwidth has to be available somewhere else in the system? If it isn't available, then the EE+GS IC is a given?
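
As a sanity check on where that 48 GB/s figure comes from (using the commonly quoted GS numbers: a 2560-bit path into the 4MB of eDRAM at roughly 150 MHz):

# Where the GS's quoted 48 GB/s comes from (Python).
bus_bits  = 1024 + 1024 + 512      # read + write + texture ports into the 4MB eDRAM
clock_mhz = 150                    # GS core/eDRAM clock (rounded)
bw_gb_s = (bus_bits / 8) * clock_mhz * 1e6 / 1e9
print(bw_gb_s)                     # 48.0 GB/s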
 
Jaws said:
nAo said:
Jaws said:
So you think it's WEAK?
Yeah... Nvidia had too little time to add eDRAM to their design.
Even if I would like to be wrong on this..

I suspect something along the lines of NV's TurboCache tech being used instead? What about all the Sony-Toshiba investment in eDRAM tech over the years?

Well, can we infer other things from this key issue? i.e. I'm under the impression (correct me if wrong) that in order to emulate the PS2 successfully, you will need more than 48 GB/s of bandwidth somewhere in the system, because of the GS's 48 GB/s. If it's not the actual EE+GS IC, then this bandwidth has to be available somewhere else in the system? If it isn't available, then the EE+GS IC is a given?

Doesn't nVidia's TurboCache deal with accessing external memory, and wouldn't any eDRAM act as local memory? Just because the PS2 had eDRAM doesn't mean the PS3 will. They are different machines, after all.
 
[Diagram: without TurboCache]

[Diagram: with TurboCache]

[Diagram: TurboCache]

http://www.hardwareanalysis.com/content/article/1766/

TurboCache should work with or without eDRAM, i.e. with local memory and with system RAM (i.e. XDR).
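
A minimal sketch of the idea as I understand it (this is just an illustration of the concept, not NVIDIA's actual allocation logic): surfaces go into the fast local pool when it has room and spill over to system RAM otherwise, so the scheme doesn't care whether that local pool is GDDR or eDRAM:

# Toy TurboCache-style placement decision (Python, purely illustrative).
def place_surface(size_mb, local_free_mb, system_free_mb):
    """Prefer the fast local pool (GDDR or eDRAM); spill to system RAM if it doesn't fit."""
    if size_mb <= local_free_mb:
        return "local"    # surface lives in local memory
    if size_mb <= system_free_mb:
        return "system"   # surface lives in system RAM (XDR) and is accessed over the bus
    raise MemoryError("no room for surface")

print(place_surface(8,  local_free_mb=16, system_free_mb=200))  # local
print(place_surface(32, local_free_mb=16, system_free_mb=200))  # system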
 
nAo said:
Jaws said:
So you think it's WEAK?
Yeah... Nvidia had too little time to add eDRAM to their design.
Even if I would like to be wrong on this..

eDRAM also takes real estate away from other features. It seems reasonable to assume that the PS3 GPU will be 90nm. The NV40 was 222M transistors.

If you add a lot of eDRAM you get a lot of bandwidth, but bandwidth-limited scenarios are usually high resolutions (1600x1200) with a lot of AA/AF enabled and features like HDR (though I would not put too much emphasis on the lack of performance FarCry shows with HDR... let's see how new HW that treats this feature as a basic task, plus games designed to use it from the ground up, performs before we assume everything takes a 50% hit in FPS). If the "max" most games will be running at is 720p, you have to wonder if it is worth cutting out some rendering power in favor of bandwidth. The 6600GT, with half the memory bandwidth of the NV40, runs fine when the resolution is backed down a little and only modest levels of AA/AF are applied.

Obviously the PS3 GPU is going to be powerful. If you convert the real estate of the 6 vertex shader units to pixel shaders (i.e. assuming the SPEs will do vertex shading) and double the transistor count, we very well could see 32 shader units with SM 3.0+ tech.

But if you begin adding eDRAM, you increase the cost of the chip and reduce its rendering power. So if they can get away with XDR or even GDDR3, it may be a better tradeoff than losing shader units for extra bandwidth.

I guess we will know more for sure in March. But if I had to pick between certain tradeoffs, more shaders and more external memory seem like a better choice to me than eDRAM plus less memory and fewer shader units. But maybe Sony/nVidia have a unique solution that will allow the best of both worlds... XDR can have a BW of ~50GB/sec with 4 chips, I believe. Maybe they will use a 4x256mb or 8x128mb chip configuration for the GPU?
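
To put some very rough numbers on that tradeoff (these are my own assumptions for illustration, not leaked specs; I'm treating eDRAM as roughly one transistor per bit and ignoring sense amps, redundancy and interface logic):

# Back-of-the-envelope transistor budgeting (Python, pure speculation).
nv40_m   = 222                 # NV40 transistor count, in millions
budget_m = 2 * nv40_m          # "double the transistor count" scenario -> 444M

edram_mb     = 32              # hypothetical eDRAM pool size
edram_cost_m = edram_mb * 8    # 32MB ~= 256 Mbit ~= ~256M transistors (1T1C cells)

print(budget_m, edram_cost_m, budget_m - edram_cost_m)  # 444 256 188 left for logic

In that crude scenario, a framebuffer-sized eDRAM pool eats more than half of a doubled NV40 budget, which is the shader-units-versus-bandwidth tradeoff in a nutshell.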
 
Acert93 said:
nAo said:
Jaws said:
So you think it's WEAK?
Yeah... Nvidia had too little time to add eDRAM to their design.
Even if I would like to be wrong on this..

eDRAM also takes real estate away from other features. It seems reasonable to assume that the PS3 GPU will be 90nm. The NV40 was 222M transistors.

If it's 65nm (I think that's a more reasonable guess, seeing various news articles) and it's without vertex shaders and PC-specific features, there's more than enough space for eDRAM.
 
one said:
If it's 65nm (I think that's a more reasonable guess, seeing various news articles) and it's without vertex shaders and PC-specific features, there's more than enough space for eDRAM.
The problem is not space, but time. NVIDIA's deal with Sony was finalized too late..
 
one said:
If it's 65nm (I think that's a more reasonable guess, seeing various news articles) and it's without vertex shaders and PC-specific features, there's more than enough space for eDRAM.

I know that 65nm is the goal for CELL. But 65nm gives Sony less than one year to get the GPU there. I think that may be asking a lot. First is the fact that nVidia is just moving to 90nm; asking them to move to 65nm in less than a year (if the PS3 is released in Japan in the spring, meaning they have to have enough supplies stocked for a couple of million units by then) is a pretty big task. Throwing in eDRAM, something nVidia has not worked with to my knowledge, on top of a new process is a lot to ask.

And this goes back to my original pontification--so what if it is 65nm? Is the transistor budget better spent on shader units and new tech, or on eDRAM? XDR is very fast, so why not just have a GPU memory pool of XDR? That would seem a lot less risky than eDRAM, which brings with it issues of production costs/yields, heat, die size, and so on.

Yeah, we all want big, fast, complex chips. But there is also cost involved ($300 console) and the fact they need to have millions of these things ready next year.

And if the last 2 GPU generations are an indication, complex chips on a new process have horrid availability. It was not until the 9800/9600 refresh that we saw good availability (and the NV40/R420 are still not as available as they should be, and we are still waiting for their refreshes to kick in). So jumping from 90nm to 65nm with eDRAM has a lot of hurdles to overcome--in quantity--in one year's time. It looks like they are going to try with CELL. Can they do it with a GPU too?
 
Jaws said:
the Xenon is rumoured to have it...
EDRAM is rumored to be present in Xenon, not eDRAM.
Of course, eDRAM for Xenon is not out of the question, but it has never been anything but an educated guess for the moment. On the other hand, EDRAM is an actual rumor.

Uttar said:
My point is just that we don't even have reliable R520 specs to compare it to, so precise GPU performance information wouldn't do us any good. Of course it's gonna be faster than today's high-end PC GPUs, and of course it's going to be clocked at more than 500MHz. But that doesn't tell you much, now does it? :)
No, indeed, that doesn't tell us much. :p

Seriously, Uttar, where are the insider tidbits that we want to hear and then discuss relentlessly for 20 pages?
I'm talking about "leaked performance data" and "8 extreme pipelines"-class kind of info. :D

Now, seriously, this time I mean it: I remember that a few years ago you knew some general info about the then-future NV50 project. Even if this "past NV50" is not the NV50 (or G70, whatever) we will see, would you mind telling us what was original about that architecture? Deferred technology (for rendering or for shaders)? A unified architecture? Something else?
 
nAo said:
one said:
If it's 65nm (I think that's a more reasonable guess, seeing various news articles) and it's without vertex shaders and PC-specific features, there's more than enough space for eDRAM.
The problem is not space, but time. NVIDIA's deal with Sony was finalized too late..
Then why didn't they choose ATi instead, for better integration of eDRAM? Why is it a licensing deal? Why do they use bulk CMOS only for the GPU? ;)
 
Acert93 said:
one said:
If it's 65nm (I think that's a more reasonable guess, seeing various news articles) and it's without vertex shaders and PC-specific features, there's more than enough space for eDRAM.

I know that 65nm is the goal for CELL. But 65nm gives Sony less than one year to get the GPU there. I think that may be asking a lot. First is the fact that nVidia is just moving to 90nm; asking them to move to 65nm in less than a year (if the PS3 is released in Japan in the spring, meaning they have to have enough supplies stocked for a couple of million units by then) is a pretty big task. Throwing in eDRAM, something nVidia has not worked with to my knowledge, on top of a new process is a lot to ask.

I thought it was a licensing deal, and that most of the chip implementation would be done by Sony, just like NEC did for ATi in the GameCube with NEC's own eDRAM technique, no?

Acert93 said:
And if the last 2 GPU generations are an indication, complex chips on a new process have horrid availability.

So nVIDIA changed its fab from TSMC to IBM recently. :)
 
one said:
I thought it was a licensing deal, and that most of the chip implementation would be done by Sony, just like NEC did for ATi in the GameCube with NEC's own eDRAM technique, no?

nVidia is still the primary designer of the chip, and there are still time constraints. You are asking them to jump to a new process and new tech in a year. Without knowing more, I think that is a lot to assume.

So nVIDIA changed its fab from TSMC to IBM recently. :)

If I am not mistaken (and it is almost 4am, so I very well could be), I believe they began switching to IBM before the NV40 and promptly went back to TSMC. IBM has had its own issues (e.g. G5 speeds), so switching to IBM does not seem like a one-size-fits-all band-aid.

The PS3 GPU may very well be 65nm with eDRAM, but without any official information or hints I think we should be careful about getting ahead of ourselves. Setting high expectations that are not met is kind of like anti-hype. e.g. there was a lot of speculation that CELL would have eDRAM, but that did not come to pass either. If CELL does not have eDRAM, why would the GPU?

And since 65nm is a new process and you are talking about needing millions of units quickly, well, that is a huge hurdle. It is not about the tech alone, but availability. They have to get this out the door at a realistic price. And the quotes from the CELL release from some analysts talking about $500-$700 PS3s are not realistic. If they come in at that price they are DOA.
 
Acert93 said:
The PS3 GPU may very well be 65nm with eDRAM, but without any official information or hints I think we should be careful about getting ahead of ourselves. Setting high expectations that are not met is kind of like anti-hype. e.g. there was a lot of speculation that CELL would have eDRAM, but that did not come to pass either. If CELL does not have eDRAM, why would the GPU?

Well, Cell is without eDRAM, but each SPE has 256KB of LS instead of the 128KB in the patent, which is better in some cases. Also, using SOI to get a higher clock speed prevents the incorporation of eDRAM, so at 65nm it's natural for Cell to go without eDRAM, but not for the GPU, since the 25.6GB/s shared bandwidth to XDR DRAM sucks and should be complemented by some cache such as eDRAM rather than by chasing a high clock speed.
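
For a feel of how tight a shared 25.6GB/s could be, here's a very crude framebuffer-traffic estimate (all the numbers below are my own assumptions: 720p with 4x AA samples, 32-bit colour plus 32-bit Z, ~3x overdraw, read+write per touch, no compression, and it ignores texturing and CPU traffic entirely):

# Crude 720p framebuffer bandwidth estimate (Python, rough assumptions only).
width, height    = 1280, 720
samples          = 4               # 4x AA samples per pixel (assumed)
bytes_per_sample = 4 + 4           # 32-bit colour + 32-bit Z
overdraw         = 3               # average times each pixel is touched (assumed)
rw_factor        = 2               # read + write per touch
fps              = 60

bytes_per_frame = width * height * samples * bytes_per_sample * overdraw * rw_factor
print(round(bytes_per_frame * fps / 1e9, 1))   # ~10.6 GB/s, before textures or CPU traffic

Even this crude figure is a big slice of 25.6GB/s once the CPU and texture fetches want their share, which is basically the argument for a local eDRAM/cache pool.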

Acert93 said:
And since 65nm is a new process and you are talking about needing millions of units quickly, well, that is a huge hurdle. It is not about the tech alone, but availability. They have to get this out the door at a realistic price. And the quotes from the CELL release from some analysts talking about $500-$700 PS3s are not realistic. If they come in at that price they are DOA.

Eh... isn't the PSP/PSX's 90nm process new? I guess Intel will start to sell 65nm chips in Q1 of CY2006.
 
One, eDRAM is not something you can just slap in there; you have to design an architecture around it! If NVIDIA's next-gen GPU uses eDRAM from the start, then there's a chance we'll see an eDRAM-based GPU; otherwise I wouldn't bet on it ;)
 