PlayStation 3 GPU: NV40 and NV50 Hybrid with XDR DRAM...

Sony Computer Entertainment Inc. first talked about its graphics processor for the next-generation console PlayStation 3 at the International Solid-State Circuits Conference in early February 2001. The chip, made using 0.18 micron process technology, could process 75 million polygons per second, had a pixel fill-rate of 1.2 – 2.6 gigapixels per second and contained 256Mb of on-chip embedded DRAM.

At least they could get their typos or facts right.

While NVIDIA will have to implement an XDR-supporting memory controller in its PlayStation 3 GPU, it is as yet unclear whether the company will license Rambus' controller or develop its own. In either case, the controller may be reused in other applications developed by NVIDIA Corp., including consumer, graphics, desktop and networking products.

I find it doubtful, since Cell will most likely house the XDR memory controller. But it's something I am wondering about too. Next year can't come soon enough.
 
Xbitlabs' translation is erroneous, as you guessed :LOL:

The original article (not a "report" as xbitlabs suggested, but more of a column with educated speculation plus some industry sources) by Hiroshige Goto says nothing about it being a hybrid of NV4X and NV5X, but I can guess where xbitlabs got it mistaken.

I'd like to translate the first section of the article as accurately as possible:

The easily guessable Xbox 2 GPU

Sony Computer Entertainment (SCEI) announced that nVIDIA is in charge of developing the graphics chip for the next-generation PlayStation (PlayStation 3?). It's assumed that nVIDIA is now rushing toward the completion of this chip, but currently nothing is known about this nVIDIA PS3 media processor.

"The next GPU (for the PlayStation) is a custom GPU. It's different from a GPU for the PC. It's based on an architecture beyond GeForce 6 (NV4x)."

nVIDIA President/CEO Jen-Hsun Huang said the above in answer to the question of whether PS3's media processor is NV50-based or not.

In fact, nVIDIA is devoting its resources to the PS3 by delaying NV50's schedule. It's certain that nVIDIA is pouring huge development resources into the PS3, and it's probable that its GPU will be innovative in terms of architecture. As for its generation, it comes just in between NV4X and NV5X.

In contrast with the PS3 media processor, about which many things are uncertain, you can guess at the GPU for the Xbox 2 more easily, since what Microsoft wants is DirectX hardware. ATi is developing R500 for the Xbox 2 and is going to release R520 for the PC in Q2 2005. Judging from the codenames too, those two chips seem to have something in common in their basic architecture.

You can also estimate the architecture of the Xbox 2 chip to some degree. As DirectX is at a standstill until the next-gen OS Longhorn, the architecture extensions in the Xbox 2 won't go much further than the current DirectX 9-generation chips. Support for DirectX 9 Shader Model 3 is a sure thing, and it's said that there are other extensions, but there seems to be no radical extension.

So, basically he draws his speculation from only two facts: the JHH comment and nVIDIA devoting serious resources to the PS3. He says nothing like it "will use NVIDIA’s technologies found in the current NV40 generation of its own chips as well as numerous techniques developed for the next-generation part known under NV50 code-name," which is what xbitlabs suggested. According to his speculation, the PS3 media processor has a very different architecture from the PC GPUs (NV4X and NV5X) and is not based on them either. Only its development timeframe falls in between them, as NV5X was delayed.

The remaining three sections of the article discuss three things:

Section 2. Which part deals with geometry processing?

Goto speculates that the media processor has geometry pipes too, like the current nVIDIA architecture (Vertex Shader & Pixel Shader, not unified). It's better in performance per die area than a Cell-based GPU. It's logical to put the 3D pipes on one chip, as in-chip processing capability can be pushed further than bus bandwidth, though in the PS3's case Redwood can mitigate this issue.

Section 3. SCEI heads for augmenting generic computing power

At GDC 2002 the SCEI CTO explained that before the PS2, developers wanted realtime graphics, but after the PS2 they wanted performance for simulation. It means that in the next generation the target is higher CPU performance, unlike the PS2, in which 75% of the silicon was devoted to graphics processing. The programmable shaders in the media processor add to the CPU's generic computing power, too.

Section 4. Benefits for nVIDIA

SCEI has been taking the most aggressive architecture among the three console vendors, which means nVIDIA can gain the most technical experience by working with SCEI and can attempt things not found in PC graphics development. nVIDIA can learn the implementation of the Redwood and XDR DRAM interfaces without the risk they would face implementing them in a PC GPU. Moreover, just as Xbox development brought nForce to nVIDIA, the collaboration with Sony will bring something back to nVIDIA.
 
DaveBaumann said:
GeForce 4/FX you mean.

No, he definitely means 3/4, not 4/FX. I'm sure there was nothing of NV3x in NV2A. If there was, can you link me any info that says so, please?
 
DaveBaumann said:
Instead you think Sony would rather dictate that NVIDIA scrap all their previous work and go down a path that they have no prior knowledge of? If Sony were to dictate a route then I would suggest they would have their own graphics processor in there; there's little point dictating the graphics route to a partner you have brought in because of their expertise on the graphics path they have been following.

If Sony/Toshiba engineers try to implement some fixed/3D-dedicated features in silicon and it then turns out nVIDIA can do it in a smaller die area, I think SCEI would choose nVIDIA and still dictate the route. Why does Sony have to license needless parts of the GPU?
 
kyetech said:
DaveBaumann said:
GeForce 4/FX you mean.

No, he definitely means 3/4, not 4/FX. I'm sure there was nothing of NV3x in NV2A. If there was, can you link me any info that says so, please?

Yep, that was also my idea of NV2A. Just an NV20 with an additional vertex shader for improved polygon performance. That's it.
 
One said:
Goto speculates the media processor has geometry pipes too like the current nVIDIA architecture (Vertex Shader & Pixel Shader, not unified). It's better in performance per die area than a Cell-based GPU.
That's all fine and well, but when you already have a CPU part with ~250-300 GFLOPS, tacking extra vertex hardware onto the GPU makes little sense, regardless of the area efficiency.
Particularly when you consider that the cumulative performance of the competing hardware is in the range of 300 GFLOPS.
 
Without sounding like a broken record, I'll reiterate this post from an SCE employee under NDA:

Hey, sorry guys, I have not posted in so long since last time. But anyways, I am sorry I couldn't say anything about this sweet deal until it was announced, since I was under very tight NDAs. Anyways, now it's official, and I would like to say that this is a co-joint collaboration between Sony in-house graphics technology and Nvidia technology. What this means is that this "custom-gpu" is a totally different architecture that is not based on an existing architecture from Nvidia.

More will come soon.

http://www.beyond3d.com/forum/viewtopic.php?p=427308#427308
 
london-boy said:
AndrewM said:
There's too much speculation and rumors about this - it's getting quite annoying.

No one's forcing you to read.

It had to be you, didn't it :)

It would be reasonable if people actually speculated on something plausible, not fanciful.
 
kyetech said:
DaveBaumann said:
GeForce 4/FX you mean.

No, he definitely means 3/4, not 4/FX. I'm sure there was nothing of NV3x in NV2A. If there was, can you link me any info that says so, please?

According to a few of the devs around here, NV2A has double-pumped Z and stencil, a feature that was introduced on the PC with NV30, not NV25. There is also this age-old Allard quote (originally sourced from a Gamespot interview that appears not to be online anymore): "The second thing is the part that we have in the box is not just a super clocked NV20 but it actually draws some of the geometry features out of the NV30." I never heard what was drawn out of NV30 though (and IMO that probably relates to the Z and stencil features more than geometry capabilities).

one said:
If Sony/Toshiba engineers try to implement some fixed/3D-dedicated features in silicon and then it turns out nVIDIA can do it with smaller die area, I think SCEI choose nVIDIA and still dictates the route. Why does Sony have to license needless parts of the GPU?

Given the composition of PS2 I think Sony have "fixed function" 3D hardware pretty well understood! This is the easy end of the pipeline and not really where NVIDIA are spending the majority of their R&D (although there will still be improvements to come).

Jaws said:
Anyways, now it's official, and I would like to say that this is a co-joint collaboration between Sony in-house graphics technology and Nvidia technology. What this means is that this "custom-gpu" is a totally different architecture that is not based on an existing architecture from Nvidia.

NV2A was a joint collaboration between "NVIDIA and MS" and resulted in a "custom GPU" since MS defined the operating conditions in conjunction with NVIDIA and they created a part that wasn't utilised anywhere else. Alterations to the memory bus, host interface and ditching of the VGA engine (which also satisfies the "there's nothing windows about this" quote) would constitute a "custom GPU".
 
one said:
Section 3. SCEI heads for augmenting generic computing power

In the GDC 2002 the SCEI CTO explained that the developers before the PS2 wanted realtime graphics but after the PS2 they wanted performance for simulation. It means in the next generation the target is in higher CPU performance unlike the PS2 in which 75% of silicon was devoted to graphics processing. Programmable Shader in the media processor adds to the power of CPU in generic computing, too.

It looks like DX9 HW adds a lot of possibilities for general computing power, but ATI with shader unification, Tensilica's Xtensa processor tech ... looks more advanced on that front.
Plus, if the memory (L2) is shared between the CPU/GPU in XB2, that would help too.

Am I right?
 
DaveBaumann said:
....
Jaws said:
....
Anyways, now it's official, and I would like to say that this is a co-joint collaboration between Sony in-house graphics technology and Nvidia technology. What this means is that this "custom-gpu" is a totally different architecture that is not based on an existing architecture from Nvidia.
NV2A was a joint collaboration between "NVIDIA and MS" and resulted in a "custom GPU" since MS defined the operating conditions in conjunction with NVIDIA and they created a part that wasn't utilised anywhere else. Alterations to the memory bus, host interface and ditching of the VGA engine (which also satisfies the "there's nothing windows about this" quote) would constitute a "custom GPU".

That's true and would satisfy those comments but why omit the major part,

"totally different architecture that is not based on an existing architecture from Nvidia."

...of that quote! ;)

...and bearing in mind the 2006 release timeframe (likely released after the rumoured R500 Xe GPU), why would it be based on the NV40 architecture and not the NV50 architecture? That would satisfy those omissions! ;)
 
totally different architecture that is not based on an existing architecture from Nvidia.

Wouldn't that mean that it's not an NV40-derived core? i.e. NV50

Or was that what you were trying to point out?
 
AndrewM said:
totally different architecture that is not based on an existing architecture from Nvidia.

Wouldn't that mean that it's not an NV40-derived core? i.e. NV50

Or was that what you were trying to point out?

He is not saying what it is, but what it isn't.
 
pc999 said:
one said:
Section 3. SCEI heads for augmenting generic computing power

In the GDC 2002 the SCEI CTO explained that the developers before the PS2 wanted realtime graphics but after the PS2 they wanted performance for simulation. It means in the next generation the target is in higher CPU performance unlike the PS2 in which 75% of silicon was devoted to graphics processing. Programmable Shader in the media processor adds to the power of CPU in generic computing, too.

It looks like DX9 HW adds a lot of possibilities for general computing power, but ATI with shader unification, Tensilica's Xtensa processor tech ... looks more advanced on that front.
Plus, if the memory (L2) is shared between the CPU/GPU in XB2, that would help too.

Am I right?

IIRC the nVIDIA CTO stated that shaders unified in hardware are inefficient for graphics processing. Also, it's said that using current-generation/DX9 GPUs for generic computing such as simulation is still too early, according to the GP2 workshop presentations.
 
Jaws said:
That's true and would satisfy those comments but why omit the major part,

"totally different architecture that is not based on an existing architecture from Nvidia."

...of that quote! ;)

Because I thought that was already a given, seeing as JHH has already come out and flat out said that it's based on their next-generation architecture, which is not a "currently existing" architecture.

...and bearing in mind the 2006 release timeframe (likely released after the R500 rumoured Xe GPU), why would it be based on NV40 architecture and not the NV50 architecture? Which would satisfy those omissions! ;)

Did I suggest that it was NV40 based?
 
london-boy said:
Fillrate is not only used for what you actually see on screen you know

Duh. :rolleyes: Still, have you stopped to think exactly how much 20 Gpix/s is?

1080p res is just over 2 Mpix. That's nearly 10,000 full-screen fills per second, or more than 160 fills per frame at 60 frames/sec. It's still 40 full-frame fills per frame with 4x supersampling. Can you realistically think of anything you want to do that needs that kind of fillrate? :oops: And remember, 20 Gpix is counting low, as NV40 clocks faster than 400MHz in some variants.

3xNV40 performance is completely unrealistic. It's even sillier than expecting 1 TFLOP from PS3's CPU, because a chip at that performance level is actually doable (prototype Cells already clock at 4+ GHz, and there are viable uses for that level of performance in games for a variety of purposes), while you have no use whatsoever for pixel fills in the range of 20 billion pixels/s on output devices that can realistically display only a fraction of that.

It's simply a preposterous idea! Period. :)
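For what it's worth, the back-of-envelope numbers in that post check out. A quick sketch (the 20 Gpix/s starting figure is the post's own assumption of roughly 3x an NV40-class part, not an official spec):

```python
# Back-of-envelope fillrate check: how far does 20 Gpix/s go at 1080p/60?
FILLRATE = 20e9             # pixels/second (assumed ~3x NV40-class fillrate)
PIXELS_1080P = 1920 * 1080  # ~2.07 Mpix per full frame
FPS = 60

fills_per_second = FILLRATE / PIXELS_1080P  # full-screen fills each second
fills_per_frame = fills_per_second / FPS    # full-screen fills per rendered frame
fills_per_frame_4xss = fills_per_frame / 4  # same, with 4x supersampling

print(round(fills_per_second))      # 9645  -> "nearly 10,000 fills/sec"
print(round(fills_per_frame))       # 161   -> "more than 160 fills/frame"
print(round(fills_per_frame_4xss))  # 40    -> "40 fills/frame with 4x SS"
```

So even at 4x supersampling there would be headroom to overdraw the whole 1080p screen roughly 40 times every frame, which is the crux of the "nobody needs 3xNV40 fillrate" argument.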
 