nVidia building the PS3 GPU in its "entirety"

It's interesting because Sony and Toshiba want to use Cell in CE applications to support their broadband distribution of digital media. Cell, from the patents, was suited towards this movement of data across an IPv6 network fabric; their recent ISSCC press releases mentioned its use in CE again. What good would an NV50 derivative do in Sony's CE equipment?

I think you stated:
Joe said:
nVidia is designing the GPU like they would any other PC chip.
 
DeanoC said:
I suppose I should mention that I quite like Cell/PS3; it's just that I don't get the argument that the future is Cell, that it's inherently better than the other architectures. It's got some cool ideas that look fairly good for a games console, but as a Sony employee told me, "It's just a good CPU".

To me it's just a high-performance float version of ARM. Fast, cheap and cool for techies, but not really making much of a difference to the rest of the world outside embedded systems...

I would like to see it on PCs... I do think that your Pentium 4s and Athlon 64s are pushing performance where most PC users (sorry, I should specify... users who use 3D applications and multi-media software on their PCs extensively... and these seem to be the driving force behind a lot of the money spent in the PC sector too, as nVIDIA's CEO pointed out in the conference call) do not need it, and not delivering on what those users do need: I prefer an approach like the one CELL and Xbox 2/Xenon's CPU are taking. Less optimized single-thread/scalar performance and high-speed multi-thread/multi-processing/vector and multi-media processing performance.

Yes, this would be a "gaming/multi-media" kind of PC, but I have no problem seeing future PCs differing greatly, from corporate PCs to server-oriented PCs to gaming-oriented PCs.

I would like to see IA-64 and parallel-processing/multi-core approaches replace x86 in the consumer space as soon as possible, as I see that as a much better step forward than x86's evolution.
 
Tuttle said:
Future is shaders? I don't think so. PC shaders are nothing more than an artifact of the hardware layout of desktop computers.

Shaders are just code.

The only difference between a shader and 'normal' code is that it's designed to be executed by N (where N is 1 to infinity) processors simultaneously. Shaders don't even have to run on a GPU; they run well on at least two CPUs I know of (one being the Emotion Engine).

Stop looking at PC vs Cell and look at the parallel problem. Suddenly you see two slightly different approaches to the same problem; both seem quite valid to me, both have advantages and disadvantages. I don't see either having a killer blow to the other...
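To make the "shaders are just code" point above concrete, here is a minimal sketch of the same per-pixel kernel executed by N CPU threads instead of a GPU. This is purely illustrative; names like `shade` and `run_shader` are made up for the example, not any real API.

```cpp
// Minimal sketch: a "pixel shader" is just a pure per-element function,
// so any N processors can apply it in parallel. Illustrative only.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

struct Pixel { uint8_t r, g, b; };

// The shader kernel: identical code whether a GPU or a CPU runs it.
Pixel shade(Pixel p) {
    auto lift = [](uint8_t c) { return uint8_t(c / 2 + 96); }; // simple tone curve
    return { lift(p.r), lift(p.g), lift(p.b) };
}

// "Dispatch": split the framebuffer across N worker threads (N = 1..whatever).
void run_shader(std::vector<Pixel>& fb, unsigned n) {
    std::vector<std::thread> pool;
    size_t chunk = (fb.size() + n - 1) / n;
    for (unsigned t = 0; t < n; ++t) {
        pool.emplace_back([&fb, t, chunk] {
            size_t lo = t * chunk, hi = std::min(lo + chunk, fb.size());
            for (size_t i = lo; i < hi; ++i) fb[i] = shade(fb[i]);
        });
    }
    for (auto& th : pool) th.join();
}

int main() {
    std::vector<Pixel> framebuffer(640 * 480, Pixel{64, 64, 64});
    run_shader(framebuffer, 4); // same result with 1 thread or 400
}
```

Nothing in the kernel itself is GPU-specific; the hardware layout only decides where the dispatch loop sends the work, which is the point about shaders running fine on CPUs like the Emotion Engine.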
 
<Tip toes into thread>

Guys WTF are you talking about for the last 5 pages?

It seems we get a Dave and Vince show every other day now, and I dunno if anyone else feels the same, but I have no clue what you're discussing. It seems to be a round-robin game of semantics and circular BS!

Can someone please state the question being discussed clearly and concisely, without fluffy words and verbal diarrhoea?

/Sorry coffee not working.

<Tip toes out of thread>
 
Jaws said:
<Tip toes into thread>

Guys WTF are you talking about for the last 5 pages?

It seems we get a Dave and Vince show every other day now, and I dunno if anyone else feels the same, but I have no clue what you're discussing. It seems to be a round-robin game of semantics and circular BS!

Can someone please state the question being discussed clearly and concisely, without fluffy words and verbal diarrhoea?

<Tip toes out of thread>

There is a God then!!!!
 
I don't get it. If nVidia's effort is based on the NV50 architecture, why'd they announce they're dropping the NV50?
 
Vince said:
It's interesting because Sony and Toshiba want to use Cell in CE applications to support their broadband distribution of digital media. Cell, from the patents, was suited towards this movement of data across an IPv6 network fabric; their recent ISSCC press releases mentioned its use in CE again. What good would an NV50 derivative do in Sony's CE equipment?

The same thing Toshiba's MeP would do for Toshiba's CE devices, in which they plan to pair CELL with their own scalable MeP architecture.

I do not think NV50 will be as limited in its scalability as you might think: nVIDIA has had scalability in mind since they first started NV50 development, IMHO.
 
Shifty Geezer said:
I don't get it. If nVidia's effort is based on the NV50 architecture, why'd they announce they're dropping the NV50?

And the wheel goes round and round and round....

(I asked the same thing on the 2nd page, and in another 3 threads...)
 
Shifty Geezer said:
I don't get it. If nVidia's effort is based on the NV50 architecture, why'd they announce they're dropping the NV50?

1.) Show the official announcement.

2.) Think about Xbox 2 and its use of an R400 derivative, which will probably appear in the PC space later on as R600; in the PC space it was substituted by the more conservative R420 (which was based on the R3XX architecture).
 
Panajev2001a said:
The same thing Toshiba's MeP would do for Toshiba's CE devices, in which they plan to pair CELL with their own scalable MeP architecture.

I do not think NV50 will be as limited in its scalability as you might think: nVIDIA has had scalability in mind since they first started NV50 development, IMHO.

Maybe, but the differential in complexity between what I'd assume to be NV50 and MeP is just a bit. heh.

Shifty Geezer, London-boy -- if Dave says it's cancelled I'll believe it; until then, the Inquirer is still a piece-of-shit B3D wannabe.
 
Vince said:
It's interesting because Sony and Toshiba want to use Cell in CE applications to support their broadband distribution of digital media. Cell, from the patents, was suited towards this movement of data across an IPv6 network fabric; their recent ISSCC press releases mentioned its use in CE again. What good would an NV50 derivative do in Sony's CE equipment?

I think you stated:
Joe said:
nVidia is designing the GPU like they would any other PC chip.

Yes, I did. I would revise that to say "nVidia is designing the PS3 GPU like they would any other GPU, including their PC-based ones."

What good would an NV50 derivative do in Sony's CE equipment? That all depends on what NV50 is designed to do (how flexible it is, etc.), doesn't it?
 
The small team of 50 engineers means either:
Nvidia's NV-Next architecture is set, so they can easily slap a custom NVPS onto the Cell CPU, or
Sony's architecture is set and shares the same philosophy as Nvidia's, so only a small number is needed to help professionally optimise the Cell VS?
Is this what you think, Panajev?
 
Vince said:
Dave, ask a console developer here if they have greater flexibility in using resource sharing and balancing processing between components on the PC or on a console.

Developers are only just coming to grips with what they can do with the point-to-point serial nature of PCI Express, so they are only just beginning to explore the possibilities. Curiously, it's the IHVs that have taken the bigger step, by producing hardware that now renders directly to/from system RAM since the performance is good enough to warrant it.

Vince said:
Dave, any open system will need standards; and standards that are mediated by committee will be slower than what a single vertical organization can do. For example, if Sony wants XDR or RaZor, it gets it. AGP was launched in what, 1998... so it's been 6 years or thereabouts?

And standards can also shift within a generation without large platform changes - in the 6 years of AGP we've had 4 revisions; PCI Express is already scheduled to double bandwidth within a few years, and that's per lane. I don't see that the open nature of the workgroups has inhibited the introduction of PCI Express; if so, how is this behind anything else that is realistically affordable for the market now?

IMO, fundamentally we're not really talking about issues with the approach, but the plain old bog-standard difference between consoles and PCs - PCs have to go for the technologies that are realistically affordable for the market at the time of introduction; consoles can use more exotic elements at the time of introduction because they know they are playing a longer game with the hardware and costs will drop. PCs will inevitably catch up and exceed consoles during their lifetime as the costs for the technologies come into line with being feasible.

Vince said:
Cell isn't built around the PUs, Dave... at its heart it's a large concurrent SIMD vector processor in the same vein as a unified shading architecture, but a hell of a lot faster in clock and more flexible.

And it still has no specialised hardware for hiding texture latencies, and is the instruction set tuned to pixel-shading-type functionality?
 
This thread has taken a different turn from when I last looked at it, which was around 3 or 4 AM Central time.

Basically, this thread is now about the high-performance custom closed architecture, not bound by PC constraints, with massive performance that the PC should not be able to attain at any given timeframe - versus the PC architecture, which caters to compatibility and offers high performance only in certain pieces that are unable to work together in absolute harmony without inefficiencies or major bottlenecks. PCs take baby steps every few months. High-performance custom closed architectures like SGI visualization supercomputers, IBM supercomputers, NEC supercomputers and custom non-PC-based consoles like the Playstation and Playstation 2 take massive leaps by introducing a new, fresh architecture at each generation shift. It is true that at the time of their debut and release, the PS1 (1993, 1994-1995) and PS2 (1999, 2000) were unmatched by the best PCs that could be put together with off-the-shelf components. Pretty much the same with SGI visualization supercomputers (to a much larger degree) and other supercomputers.

Although Vince might not acknowledge my post here, I do agree with a lot of the things he says.

The Playstation 3 and CELL were envisioned to finally smash the PC architecture and deem it irrelevant or dead: a massive upping of transistors, a totally fresh architecture, totally surpassing Intel (and AMD). At least publicly, that is what Sony was talking about in the early days of 1999 and 2000 with Playstation 3, before Cell was announced in 2001. Back then it was all about the Emotion Engine 3 and Graphics Synthesizer 3; with drastically changed architectures, Sony would break Moore's Law with a processor that had 500 million transistors - far more than what Intel would likely have for consumers in 2005-2006.

At that time (1999-2000) the Pentium III was the high end in PC computing, the Athlon (K7) was getting into gear and the Pentium 4 was on the near-term horizon. PC graphics chips were advancing steadily, but still very much constrained by the worst aspects of the PC architecture (AGP, etc.). The Graphics Synthesizer 3 would have a massively parallel design, like the highly parallel design of the GS1 in PS2. Much like Intel (and AMD) would not be able to compete with EE3, I think Sony hoped that Nvidia, 3Dfx, ArtX (and others) would not be able to compete with GS3.

The EE3+GS3, combined with a decent set of tools, a good API and a competent OS, would be able to obliterate Wintel in both the living room and on Wintel's own turf, which was always the desktop and more recently the workstation & server markets, as well as supercomputing. All kinds of CE and computing devices could be built using the new EE3 and GS3 architectures, much like the Cell architecture that came to light in 2001. (I personally believe that the drastically changed architectures of EE3 and GS3 were in fact the Cell-based Broadband Engine CPU and the Cell-based Visualizer.)

[post not completed yet, work in progress, kinda rambling as I collect my thoughts]

I happen to believe in the non-PC-centric approach. When it's done right, it shows the weakness of the PC architecture very well.

Even well-put-together systems that are not all that radical, and that only somewhat move away from the PC architecture, can show how bad the PC is - like Sega's Model 3 board of 1996: PowerPC CPU + 2 Lockheed Real3D GPUs. There is no way that a properly equipped PC of 1996 could compete with Model 3. You had Pentium CPUs, the Pentium Pro at best (I don't think the PII was out then), and 3Dfx Voodoo graphics. Model 3 slaughtered such PC systems. The PC was hard pressed to rival even the nearly three-year-old Model 2 board in late 1996.
 
Panajev2001a said:
Dave, look at the NV2A, look at the NV20... I think that sometimes there are clearly cases in which nVIDIA (or ATI) saw performance best attainable one way, saw the future being in a certain direction, requiring certain customizations... yet, let's say, Microsoft (being lobbied by ATI and other companies, not just nVIDIA) disagreed and pushed DirectX in another direction.

Under DirectX, nVIDIA is limited in what they can expose of their innovative ways, and if OpenGL for gaming went *puff* and DirectX were the main and basically only commercially viable solution in the PC space, then you would see, IMHO, a model that would have to follow Microsoft's pace and not the industry's pace.

That's a fairly poor example really, since capability-wise (not performance, but features) NV25 and NV2A were very, very similar, if not identical, so that's hardly a case of DX stifling innovation at the hardware level, since it was designed in. It's presupposing that MS is "dictating" the standard, which I'm sure you know isn't the case - MS has to be driven by where the IHVs see the capabilities in terms of feasible design implementation in the time frame, and the R&D that the IHVs undertake, alongside MS's own research; all of this is also driven by the feedback MS is receiving on developers' wants/needs, directly and through the IHVs.

MS's DX introductions are also usually timed to the similar timescales that the IHVs are inputting to them for these types of features to be affordable - you can see that even though Xbox 1, Gamecube and Xbox 2 had graphics that were designed specifically for closed-box systems, the mainstay of the functionality doesn't significantly differ from the functionality on the PC at similar timescales, because that was what was sensible to implement; the areas of difference are a fairly small proportion of the gates in comparison to the main featureset. I'm taking a stab and saying that PS3 will now probably fall in a very similar line, especially if it is based on NV50.
 
Panajev2001a said:
The PC model also tends to prefer ease of implementation and backward compatibility to performance (compatibility in general takes the front seat over performance), as indicated once again by the industry when it chose the slowest of the three standards proposed for PCI-Express 2.0 (5.0 Gbps).

What makes sense from a performance/cost standpoint is adopted. That is what Dave already pointed out ("And, no, it's not "finally" PCI Express, it's "currently" PCI Express."). There is currently little need for an (imaginary) 100Gbps bus architecture.

Panajev2001a said:
A console, something like PlayStation 3 or Xbox 2/Xenon, not being a desktop PC and not having to be as compatible as another PC made by Dell or HP, but being its own closed platform, can afford to do much more than simply mixing and matching off-the-shelf PC parts with no customization.

Look at the Xbox 2 architecture, or a bit further in the future the PlayStation 3 architecture: they run custom software, developed for highly customized hardware (which does not need to employ the technologies that are available and used in desktop PCs) and customized APIs (the inner workings of DirectX for Xbox 1 were completely re-written and optimized for the Xbox 1 architecture, and specific customizations present in the NV2A were exposed... customizations that DirectX on the PC would not have exposed).

So, since you'll now get a PS3 with a GPU that is designed from the ground up to solve a specific problem, without any need to adhere to an all-purpose ISA such as Cell, you should be a happy camper *takes cover*

Panajev2001a said:
I really dislike this... bah, it takes a general processor and slaps some vector units on top of it.

This is overly simplistic and off the mark IMHO: hey, ya know what... there is not much difference between a BMW M5 and a Yugo... I mean, one just has a fast engine slapped on, but both have a chassis, 4 wheels, suspension and seats... the same car... *rolls eyes*

While for you current solutions might compare to Cell like a Yugo to an M5, others (me included) just do not see such a large gap. To me, current PCs already sport general-purpose CPUs for control and specialized ASICs for compute-intensive tasks (think of your Pentium/Athlon with graphics, sound and whatever other subsystems). Cell brings more integration, but looking at other embedded systems you'll see this has been the norm rather than the exception.

Panajev2001a said:
I prefer an approach like the one CELL and Xbox 2/Xenon's CPU are taking. Less optimized single-thread/scalar performance and high-speed multi-thread/multi-processing/vector and multi-media processing performance.

Take a look at major IHVs' roadmaps. You'll probably have at least dual-core solutions in desktop PCs before the first next-gen console hits retail. SMP with special interconnection schemes has been a common sight in the workstation/server markets for years.
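As a rough, purely illustrative sketch of the split PiNkY describes above - general-purpose control code making the decisions, with the compute-intensive work pushed into a tight, uniform kernel of the kind a vector unit or ASIC (or an SPE) is built for. The function names here are made up for the example:

```cpp
// Sketch of the control-vs-compute split described above: scalar "control"
// code orchestrates; a tight, uniform kernel does the heavy math. The kernel
// stands in for a vector unit / ASIC / SPE; all names here are illustrative.
#include <cstddef>
#include <vector>

// Compute-intensive kernel: one operation applied uniformly across the data -
// exactly the shape of work SIMD/vector hardware is built for.
void mix_kernel(float* dst, const float* a, const float* b, float t, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)  // a compiler can auto-vectorize this
        dst[i] = a[i] * (1.0f - t) + b[i] * t;
}

// General-purpose control code: branching, setup, scheduling decisions.
// Assumes all three buffers have the same length.
void control(std::vector<float>& out, const std::vector<float>& dry,
             const std::vector<float>& wet, bool effect_on) {
    float t = effect_on ? 0.75f : 0.0f;  // the decision logic stays scalar
    mix_kernel(out.data(), dry.data(), wet.data(), t, out.size());
}

int main() {
    std::vector<float> dry(1024, 0.2f), wet(1024, 0.9f), out(1024);
    control(out, dry, wet, true);  // control decides; the kernel crunches
}
```

Whether that kernel runs on the host CPU's SIMD units, a discrete chip, or an on-die vector unit is an integration question - which is the "Cell brings more integration" point.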
 
PiNkY said:
The PC model also tends to prefer ease of implementation and backward compatibility to performance (compatibility in general takes the front seat over performance), as indicated once again by the industry when it chose the slowest of the three standards proposed for PCI-Express 2.0 (5.0 Gbps).

What makes sense from a performance/cost standpoint is adopted. That is what Dave already pointed out ("And, no, it's not "finally" PCI Express, it's "currently" PCI Express."). There is currently little need for an (imaginary) 100Gbps bus architecture.

I find it quite ironic that "backwards compatibility" and "ease of implementation" can be thrown up against PC development in light of the PCI Express transition occurring at the moment - on the graphics front there is neither hardware compatibility nor ease of implementation. And they did it to increase the available bandwidth four-fold.
 
btw, I am still hoping that Playstation 3, despite now having an Nvidia-designed GPU with Sony's input, is far LESS PC-like than Xbox 1, Dreamcast, or even Gamecube, as well as the forthcoming Xenon. Even though Xenon is a slight shift away from the PC architecture of Xbox 1, Xenon is still a less radical shift than PS3.

I hope STI-N - Sony-IBM-Toshiba and now Nvidia - are able to combine the very best aspects of a clean-sheet non-PC architecture with Nvidia's strengths in rasterization & shaders.
 
The Xbox GPU is essentially a beefy version of the GeForce 3. Will the PS3 (or whatever it will be called) GPU be based on forthcoming desktop GPU architecture or will it be its own entity entirely?

David Roman: It is a custom version of our next-generation GPU.

Sounds like they actually are doing something for the Sony folks, i.e. making the chip, and it sounds like if we find out about the GPU in the PS3, we will know more about their next product...
 
Panajev and Vince, what do you guys think of my PS4-and-beyond predictions that I made back on page 4?



Vince, the way I see PS3 now is that the best hope for it is that it is indeed a massively powerful supercomputer CPU architecture from STI, not bound by x86 legacy or even the limitations of MIPS, combined with an SGI-heritage graphics subsystem - nVIDIA of course being mainly ex-SGI people, as well as people from other competent graphics companies (3DLabs, E&S, Real3D etc.).

Sony's and Nvidia's visions of graphics probably dovetail quite nicely.
 