nVidia building the PS3 GPU in its "entirety"

Sounds from this interview like the PS3 is not entirely a Sony/Cell design (why the break of 2?), and peecee Nvidia has a much larger say in the PS3 design. Right, Vince, tuttle?

From this interview, I still can't tell how the NV GPU is going to be paired with Cell; JH is still very vague. This time the NV GPU won't house the main memory controller; Cell will have that part instead.
 
Jov said:
jvd said:
Jov said:
Putting it simply, can we expect nV5x ~ nV4x SLI [+ new features + GHz on a core]? Or are we expecting much, much more?

GHz on the core would be 2.3x the clock speed of Nvidia's current top-of-the-line, process-pushing, low-yielding GPU.

I think 1 GHz is the most we would see from it if it launches in 2006.

It's 2.3x the current top of the line due to current fabbing and process size, but when fabbing at 65nm/SOI/Lo-K/etc., they might have more room to up the GHz.

If the Cell BE is at ~4.6+ GHz, then the GPU will likely be 1/4 (worst) ~ 1/2 (best) of that speed.

So you're telling me they are going to go from 400 MHz to possibly 2.3 GHz, while making the GPU even bigger than it currently is, all while dropping from 110nm to 65nm.

I find that hard to believe.
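
For a quick sanity check of the numbers being traded here, a minimal sketch; the 430 MHz baseline is an assumption for Nvidia's fastest shipping part at the time, not a figure from the thread:

```python
# Quick sanity check of the clock figures in this exchange.
# Assumption: Nvidia's fastest shipping part runs at roughly 430 MHz;
# that baseline is not stated in the thread itself.
nv_top_mhz = 430
cell_ghz = 4.6  # Cell BE clock cited above

# jvd's "2.3x" reading: how much faster than today's top part is 1 GHz?
factor_to_1ghz = 1000 / nv_top_mhz
print(f"1 GHz is {factor_to_1ghz:.1f}x a {nv_top_mhz} MHz part")

# Jov's range: a GPU at 1/4 (worst) to 1/2 (best) of Cell's clock.
print(f"implied GPU clock: {cell_ghz / 4:.2f} to {cell_ghz / 2:.2f} GHz")
```

Read that way, jvd's 2.3x figure and the top of Jov's 1/4~1/2 range both land on roughly 2.3 GHz, which is what makes the jump from 400 MHz look so large.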
 
I'm no engineer, but I reckon that switching to Cell tech is more than a simple modification and would require more than 50 engineers. Surely none of you think that the PC part will be Cell-based?

The PS3 GPU is obviously going to be a customized part in the same sense that NV2A was.
 
A very, very small part of me says that the less involvement Sony has in the GPU, the better things will be. I know I shouldn't listen, but you know when you have that little voice in your brain...

Still, I think they'll butt in big time, if only to make sure Cell and the GPU speak the same language.
 
Regarding eDRAM: since Sony is going to fab the GPU, Nvidia is obviously designing it using Sony's design libraries. Since integrating DRAM into a logic IC is mainly a manufacturing challenge, I don't see why that should give Nvidia a major sweat. Having two memory controllers (BE and GPU) with corresponding distinct external memory banks is neither space efficient (redundant data) nor cheap, as it would significantly increase the cost of manufacturing the system board over the whole product lifecycle (I guess that is why no one went that way this generation). BTW, how are Nvidia's/ATI's/etc. approaches to designing GPUs x86-centric?
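
A toy model of that redundant-data point; all sizes below are hypothetical, chosen purely for illustration, and not actual PS3 figures:

```python
# Toy model of the split-pool redundancy argument above.
# All sizes are hypothetical and chosen purely for illustration.
cpu_pool_mb = 256   # memory behind the BE's controller
gpu_pool_mb = 256   # memory behind the GPU's controller
shared_mb = 64      # data both chips need (e.g. geometry, textures)

# A single unified pool holds one copy of everything.
unified_usable = cpu_pool_mb + gpu_pool_mb

# With two distinct pools, the shared data must live in both,
# so that capacity is effectively lost to duplication.
split_usable = cpu_pool_mb + gpu_pool_mb - shared_mb

print(f"unified pool: {unified_usable} MB usable")
print(f"split pools:  {split_usable} MB usable ({shared_mb} MB duplicated)")
```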
 
NVIDIA and ATI GPUs seem to be quite x86-independent, seeing how video cards for Macs are pretty much the same as the ones for PCs.

So I guess it doesn't matter, to a certain extent, what CPU they are coupled with, as long as the software side of things works properly.

It will be interesting to see how CELL+NVGPU turns out.
 
So can somebody tell me: how does NV50x compare to the ATI tech Msoft is using in Xenon?
 
hey69 said:
So can somebody tell me: how does NV50x compare to the ATI tech Msoft is using in Xenon?

Would be nice to know. Not sure anyone knows anything, though.

Oh, I think you need to fix that link in your sig. :D
 
Since it seems Nvidia is trying to get its next-gen GPU into production this year, doesn't that mean the ATI GPU for the Xbox 2 is going to be fairly evenly matched with it? So in terms of graphical output there can't be that much difference between the two, even if the Cell processor is, say, 3-4 times more powerful than the one in the Xbox 2? I don't know if these questions are right, but it seems to me that if your graphics card can only do X, what's the worth of having loads of CPU? Won't that just be used for AI- and physics-like calculations, and if so, surely the Xbox 2 CPU will be powerful enough to perform functions similar to those of whatever the Cell processor is. Or is there a lot more to it?
 
PiNkY said:
BTW how are Nvidia's / ATI's / etc. approaches to designing GPUs x86 centric?

x86-centric is, IMHO, an aphorism of sorts for the principles behind the generalized PC architecture that has evolved due to the multi-vendor landscape and the (in)ability to create usable and farsighted standards between each component.

I suppose a distinction can easily be drawn between the current PC architecture and programming model and that put forth in Keith Diefendorff and Pradeep Dubey's seminal paper on the necessary shift in processor design to conform to the emergence of dynamic workloads. This sentiment was most recently echoed by Peter Hofstee during his presentation in San Francisco and will likely be raised again at ISSCC.

Many will undoubtedly reply that PCI-Express will allow for this and point towards X2 -- especially those who are actively working on it without similar knowledge of PS3 -- but this isn't analogous, and X2's steps are small and evolutionary, as ERP once agreed.

QRoach said:
I'm kinda curious to see what Vince is going to say regarding this info.

Concerning my comment on this, Qroach: while I'd love to give you and your twin Johnny more comments to take out of context and use as a sig, I'll wait for tangible information before jumping on the bandwagon that many here so love to ride: Cell != PS3, Cell != 65nm, Cell != 2005 but 2007, Cell != 4GHz, et al. I like Joe, but he jumped the gun (maybe pulled the trigger ;)) on his comments quite a bit if you listen to the actual call. Besides, I'm sitting pretty so far in the grand scheme of things; there's no need to comment.

What is most interesting to me is how Jen-Hsun commented again that this will be used in all Sony CE devices, which is what Cell was intended and designed for -- we have independent confirmation from the Sony group, IBM, and Toshiba on this. So, how would one make these statements logically compatible?

PS. While I'm without a doubt an ass (no pun intended), can we ditch the link to Deadmeat's picture? Thanks.
 
Vince said:
x86-centric is, IMHO, an aphorism of sorts for the principles behind the generalized PC architecture that has evolved due to the multi-vendor landscape and the (in)ability to create usable and farsighted standards between each component.

Funny that these unusable and short-sighted standards have resulted in the most prolific form of (high-performance) computing device on the planet, with little sign of abatement. I guess all the industry drivers must be smacking their foreheads at their stupidity (but at least they have bundles of cash to fall back on, eh?).

:?
 
DaveBaumann said:
Funny that these unusable and short-sighted standards have resulted in the most prolific form of (high-performance) computing device on the planet, with little sign of abatement. I guess all the industry drivers must be smacking their foreheads at their stupidity (but at least they have bundles of cash to fall back on, eh?).

Dave, before commenting in typical knee-jerk fashion to whatever I say, take a few seconds and mull over what I posted:

  • A closed set/design will, almost intrinsically, surpass the open PC paradigm given invariant computation components and a variable integration and communications model.
The "most prolific" comment is a red herring, as you attempt to disguise the actual topic, relative performance, with sales and prominence. That is, in fact, an utterly pathetic comment, as it reinforces what I previously stated if you think logically about it and what it entails. Of course it'll be the most prolific form of computation on the planet, because it's an open model, which allows the free market to partake in the system and sets the foundation for the OEM PC business.

Yet, in accordance with what I stated in my post, systems that utilize this model will lose out in relative performance to custom systems that are closed sets: witness the top-10 supercomputers and the dominance of closed sets such as IBM's BlueGene, SGI's machines, and NEC's vector systems.

A prime example is the XBox, which used off-the-shelf components (relatively speaking) and, due to being a closed set, allowed for superior integration and all that is derived from this.

So, next time, try to respond to what was posted. I'd appreciate it.
 
When Diefendorff and Dubey's paper was written, the desktop GPU revolution hadn't started.

They picked one route to multimedia integration; the world picked another. Even Sony agrees that a specialised vector processor (a GPU) will be 'standard' on all media-centric devices for the foreseeable future.
 
DeanoC said:
When Diefendorff and Dubey's paper was written, the desktop GPU revolution hadn't started.

They picked one route to multimedia integration; the world picked another. Even Sony agrees that a specialised vector processor (a GPU) will be 'standard' on all media-centric devices for the foreseeable future.

I think the key problem that you've overlooked is the phrase "the desktop."

Let me ask you this: if the PC landscape wasn't populated by multiple vendors that compete in graphics and image processing, several others who compete in CPU design, yet others who design the interconnects and storage hierarchies, and even more who deal with all the components that go into a contemporary PC -- but instead you had, say, three monolithic companies that produced the entire system and competed against each other for sales of the PC as a whole, would the industry have gone down the same road that it has?

The current paradigm is fabulous for the PC, no question about it. It's clearly hovering around the point of equilibrium for greatest net benefit per investment; I'm not saying it's not. The current affordable PC wouldn't exist without it. But that doesn't mean for a second that this equilibrium point is in the region of greatest relative performance.

That's an assumption that you make in your argument, tacitly or not, and one which I clearly and emphatically refute.
 
Vince said:
The current paradigm is fabulous for the PC, no question about it.

Vince said:
A prime example is the XBox, which used off-the-shelf components (relatively speaking) and, due to being a closed set, allowed for superior integration and all that is derived from this.

You appear to agree that that same paradigm for PC hardware design can be applied elsewhere and work equally well.
 
DaveBaumann said:
Vince said:
The current paradigm is fabulous for the PC, no question about it.

Vince said:
A prime example is the XBox, which used off-the-shelf components (relatively speaking) and, due to being a closed set, allowed for superior integration and all that is derived from this.

You appear to agree that that same paradigm for PC hardware design can be applied elsewhere and work equally well.

Yeah, it worked so well in the console market that it only took a few billion dollars in losses to keep it afloat in head-to-head competition with older but dedicated hardware.

Three cheers for the mighty x86 PC hardware makers!
 
DaveBaumann said:
Vince said:
The current paradigm is fabulous for the PC, no question about it.

Vince said:
A prime example is the XBox, which used off-the-shelf components (relatively speaking) and, due to being a closed set, allowed for superior integration and all that is derived from this.

You appear to agree that that same paradigm for PC hardware design can be applied elsewhere and work equally well.

You see, Dave, this is what happens when you selectively quote me as if you just want to dispute something. If you'd read my entire post as intended, you'd see that I'm quite clearly stating that the current PC paradigm is fabulous for the niche it fills; namely, I highlighted the cost to consumers and the achievement of a good equilibrium in the region of greatest net benefit per investment.

I then went on to state that if the world hadn't seen the emergence of the PC paradigm, which seems inevitable with free-market dynamics and competition driving price reduction, and we were instead concerned purely with performance, there is another region on the hypothetical landscape that would yield higher performance. The XBox shows this principle to be true by following the conditions I laid out before and will list below. It also refutes your dumb post from earlier defending the current model on the grounds that it's "prolific" (totally unrelated to the argument at hand), by showing that the same components, when taken out of the competitive PC paradigm, will outperform it. As I already stated:

  • A closed set/design will, almost intrinsically, surpass the open PC paradigm given invariant computation components and a variable integration and communications model.
This argument can then be extended, quite easily, to the entire system: the components within it and the processing model, when you close the set and make it invariant -- which is what Diefendorff and Dubey did in their paper. This is the guiding principle that I believe in and, apparently, so does STI. Understand? Or are you just going to chop it up and keep going in circles?

EDIT: Formatting.
 