nVidia building the PS3 GPU in its "entirety"

Regarding the PC architecture and something alternative, along with the KK comment in the press release I quoted above (he says the current PC architecture is nearing its limit), I'd like to quote this speech about Cell by Peter Hofstee in which he talks about 'System Trends toward Integration' (around 3:10). It may help differentiate the current PC architecture from something beyond it.
 
DaveBaumann said:
Vince said:
Dave, ask a console developer here whether they have greater flexibility in sharing resources and balancing processing between components on the PC or on a console.

Developers are only just coming to grips with what they can do with the point-to-point serial nature of PCI Express, so they are only just beginning to explore the possibilities. Curiously, it's the IHVs that have taken the bigger step, by producing hardware that now renders directly to/from system RAM as the performance is good enough to warrant it.

This is exactly my point. Developers are "only just coming to grips" with things that console developers have been doing for some time concerning processing balance and access to system RAM. As a closed set, there are intrinsic benefits. Why is this so hard to agree with?

DaveBaumann said:
IMO, fundamentally we're not really talking about issues with the approach, but the plain old bog-standard difference between consoles and PCs: PCs have to go for the technologies that are realistically affordable for the market at the time of introduction; consoles can use more exotic elements at introduction because they know they are playing a longer game with the hardware and costs will drop. PCs will inevitably catch up with and exceed consoles during their lifetime as the costs of the technologies become feasible.

I fully agree that they are overlapping concepts; they are distinct but fundamentally have analogous causation, which is why I can't for the life of me understand why you're arguing. It's like you're doing it just to say something.

DaveBaumann said:
Vince said:
Cell isn't built around the PUs, Dave... at its heart it's a large concurrent SIMD vector processor in the same vein as a unified shading architecture, but a hell of a lot faster in clock and more flexible.

And it still has no specialised hardware for removing texture latencies, and is the instruction set tuned to pixel-shading-type functionality?

It doesn't? What's an SPU (not an APU), Dave, outside of an acronym we toss around and you ignore? Seriously, let's end this stupid thing once and for all... just have a go at it, man.

And then you can answer the question of what the difference is between an S|APU SIMD pathway and that in an ALU of contemporary vertex shaders; which, as I stated Here, I'd most anticipate a unified shading complex to have commonality with. But correct me if I'm wrong.
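To make that question concrete, here is a minimal Python sketch, illustrative only, of the 4-wide SIMD multiply-add and dot-product work that both an SPU-style pipeline and a Vec4 vertex-shader ALU are organized around. The function names are hypothetical and not drawn from either architecture's ISA.

```python
# Illustrative sketch of 4-wide SIMD operations common to an SPU-style
# vector pipeline and a Vec4 shader ALU. Names are hypothetical.

def vec_madd(a, b, c):
    """Element-wise multiply-add: one issue slot on a 4-wide SIMD unit."""
    return [a[i] * b[i] + c[i] for i in range(4)]

def dot4(a, b):
    """4D dot product, the core of a vertex transform (one row of a 4x4 matrix)."""
    return sum(a[i] * b[i] for i in range(4))

def transform(matrix, v):
    """Transform a 4D vertex by a 4x4 matrix: four dot4 ops per vertex,
    which is why peak-FLOPS figures often count 'one 4D dot per cycle'."""
    return [dot4(row, v) for row in matrix]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(transform(identity, [1.0, 2.0, 3.0, 1.0]))  # -> [1.0, 2.0, 3.0, 1.0]
```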
 
Pinky, your comment articulates exactly what I'm talking about:

PiNkY said:
What makes sense from a performance/cost standpoint is adopted. That is what Dave already pointed out ("And, no, its not "finally" PCI Express, its "currently" PCI Express."). There is currently little need for an (imaginary) 100 Gbps bus architecture.

Actually, I stated the performance/cost considerations versus sheer performance maximization Here & Here. This is exactly my point: ask yourself why there isn't a need for a 100 Gbps bus on the PC. Why isn't the PC designed holistically, with more attention paid to data flow and computational fluidity? Why does the interconnect/communications lag behind, as was the case with the utilization of AGP (vis-à-vis consoles) and will likely be true of PCI-E as well?

Could it be intrinsic to the PC paradigm of multiple vendors in horizontal competition, each concerned with tuning their one part based on a cost/performance equilibrium (and not looking vertically), as opposed to a more vertical company which looks at the system holistically? Could it be that there are global regions which yield higher relative performance than on the open-set PC, and could it be that a closed set/system can reach these points?

Maybe that could be why X2 is designed the way it is? ;) I'm not talking just about Cell; it applies to X2 as well concerning these aspects. I just feel they were stressed more prominently in STI's architecture than in MS's.

Megadrive1988, I usually only acknowledge that which I disagree with. It doesn't mean I don't read it.

EDIT: Links sucked.
 
Vince said:
DaveBaumann said:
It's not dumb, Vince; your statement that the PC vendors have an "(in)ability to create a usable and farsighted standard between each component" is clearly incorrect, as you have proved yourself by the very fact that it can be applied beyond the scope of the PC environment and used in a full context.

No, you just don't understand what's being stated. Very simple, and you're going to do all the thinking:

  • I have GPU Gn, CPU Cn, Storage Sn. Which of the following would more closely approximate the equilibrium point of greatest performance?
    • Closed System A: [GPU G1, CPU C1, Storage S1] with custom interconnects and system optimization; or:
    • Open System B: a random combination of [GPU Gn, CPU Cn, Storage Sn] | n = 1...1E5
The PC will always lose out to a closed-set environment, especially in the PC paradigm we have, in which there is little thinking about the system on a holistic level. Each vendor is concerned primarily with their given component and there is little cooperative work. Hell, look at the history of PCI, AGP and - finally - PCI-Express. AGP texturing, anyone?
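As a toy illustration of that bottleneck argument, and nothing more: assume delivered performance is capped by the worst-matched element, component or interconnect, and compare a hand-balanced closed box against random open combinations. Every number below is invented for illustration.

```python
import random

random.seed(1)

def delivered(parts):
    """Toy model: delivered performance is capped by the weakest element
    (component or interconnect) -- the bottleneck argument."""
    return min(parts)

# Closed System A: one vendor balances GPU, CPU, storage and the custom
# interconnect against each other (all values arbitrary).
closed = delivered([100, 100, 100, 100])

# Open System B: random combinations of independently tuned parts plus a
# lowest-common-denominator bus; sample many combinations and average.
samples = []
for _ in range(100_000):
    gpu, cpu, storage = (random.uniform(50, 150) for _ in range(3))
    bus = 80  # standard interconnect, tuned for cost, not for this combination
    samples.append(delivered([gpu, cpu, storage, bus]))

print(f"closed system: {closed:.0f}")
print(f"open system (mean of random combos): {sum(samples) / len(samples):.0f}")
```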

DaveBaumann said:
The curious element about your last point is that the Cell paradigm is designed to be used in a multitude of devices, so the hardware constructs it employs are going to have to be less specific by design, in some areas, than the pure device it's going to be utilised in.

I question the conventional wisdom that applied, and that both of us talked about, back in the 1999 period, when DX7 was around with DX8 on the horizon and all the buzz was about the move to more programmability, the questions of fixed functionality versus programmability, and what the performance delta would be. I feel that we were correct for our time, but the paradigm (I need a new word... situation) has changed with the influx of logic that's been happening, and it will only accelerate tremendously in the next two years.

When we were talking about a 20M-odd transistor NV15, the balance was a lot finer than it is when we're talking about a Broadband Engine or R500 that could be approaching a billion transistors. We've, IMHO, reached a point where the bounds are on sheer computation and bandwidth in dynamic applications. It's possible to design around a modular architecture that's focused on these types of applications, which can be scaled down to the low-end apps that have a low resource budget while retaining cost efficiency due to process advances.

For example, as the ATI guys have told me and you've mentioned on the site from time to time, general computation is moving to the GPU. Not to mention names, but someone here and I were discussing sequencing on them. A GPU is, I'm betting you're going to say, highly tuned for graphics applications, but it can still run anything. With the move to a unified shader (I don't know, but would guess the ALU they'd use is more akin to a current VS [over current PS constructs, which is why I stated this], which is akin to an APU) this just becomes more and more feasible.

Vince, I agree with almost everything you've said.

The Broadband Engine will, I think, be well north of 500M transistors, maybe even approaching 1 billion.

But I am less certain about the R500.

The current R420/R480 is only ~160 million transistors. The R520 PC VPU is rumored to be ~300M, and I'm certain Xenon's R500 will be more than 300M... but approaching 1 billion? I'd be surprised. I was thinking more around ~500M. It's got some eDRAM, but not as much as the Broadband Engine is expected to have. I would like to believe R500 has near 1 billion. Could you show me how you figure it might?
 
Vince said:
...
And then you can answer the question of what the difference is between an S|APU SIMD pathway and that in an ALU of contemporary vertex shaders; which, as I stated Here, I'd most anticipate a unified shading complex to have commonality with. But correct me if I'm wrong.

Is this the question that you guys are trying to resolve with pages of posts? Just curious... I'd really like to know, because my eyes have started to bleed from the inside...
 
I fully agree that they are overlapping concepts; they are distinct but fundamentally have analogous causation, which is why I can't for the life of me understand why you're arguing. It's like you're doing it just to say something.

I take issue with your assertion that they have an "(in)ability to create a usable and farsighted standard between each component", Vince, or the implication that the PC is somehow fundamentally EOL, or nearly, because of it. The concepts that have been put in place so far have been fairly forward-looking, and when they have reached the end of their useful lifetime, the PC industry has the capability to evolve onto newer platforms that better suit its needs at the time. It has been that way for the past 20 years; if it weren't, there is no way it could have survived that long, and they know that if they don't do so in the future they won't have an industry to feed on.

You should note that it's the entire console industry (as well as, increasingly, the handheld and consumer display device industries) that, in the absence of SGI or sufficient in-house capabilities, is now reliant on PC IHVs to provide its graphics processing capabilities, not the other way around. You have to wonder why this is the case, given the conservative, shortsighted, and backward environment they operate in! ;)

The PC industry has given them the freedom to innovate, something that wouldn't be possible in the consumer industry alone, but now that they have proven technologies they are able to sell them back into the consumer space. If it wasn't for the depth of strength and innovation that the collective group of companies contributes, and the ability to change and retarget for different uses, then you might end up looking at much worse graphics on your TV screen next year.
 
DaveBaumann said:
I take issue with your assertion that they have an "(in)ability to create a usable and farsighted standard between each component", Vince, or the implication that the PC is somehow fundamentally EOL, or nearly, because of it

I didn't say it's EOL; you're not out of a job, at least not yet (;)), which is why I like it when you respond to what I've written. I think you're taking issue with something which is a non-issue. I think we'd both agree that each type of platform has specific benefits: I've stated several for each, as have you.

My intention was to state that a closed-set console has an intrinsic advantage (component-invariant) as it's designed holistically, something which is manifested as a particular weakness in the otherwise strong PC model due to the actual market dynamics, which are more fragmented between vendors on both axes. I don't know how this didn't get across to you, but I think it's because you're English and I'm American -- so we don't share a common language.

DaveBaumann said:
You should note that it's the entire console industry (as well as, increasingly, the handheld and consumer display device industries) that, in the absence of SGI or sufficient in-house capabilities, is now reliant on PC IHVs to provide its graphics processing capabilities, not the other way around. You have to wonder why this is the case, given the conservative, shortsighted, and backward environment they operate in! ;)

Haha. Save this and we'll revisit it in 10 years; it will be interesting to see what the industry is like then.
 
Slightly off-topic: I am waiting to see an announcement that Sony, or perhaps Nvidia or ATI, has acquired that relatively small research company or group (edit: SaarCOR?) that has been working on real-time raytracing architectures using a very, very small number of transistors running at a very low frequency. edit: the AR350 chip 8)

If that happens, it would probably signal that a future architecture (i.e. PS4, NVxx, Rxxx) would be moving toward realtime raytracing, aside from their own R&D on raytracing.

I'd expect this more of Sony or ATI than Nvidia, since Nvidia's David Kirk seemed to downplay the significance of this small company's work on raytracing in a debate... oh yeah, with Prof. Slusallek:
http://www.beyond3d.com/forum/viewtopic.php?t=12881&highlight=raytracing+kirk


*imagines a Cell-2-powered PlayStation 4 with a fully realized AR350- or SaarCOR-based raytracing graphics engine on a massively, massively larger scale than those tiny prototype chips* ... please don't wake me from this dream unless it happens... ;)
 
Vince: Under Siege - the movie. :)

Anyway, Sony promised to break Moore's Law with PS3. If they don't come through, then PS3 will be just another nice piece of hardware. Can't blame them for trying though...
 
Vince said:
My intention was to state that a closed-set console has an intrinsic advantage (component-invariant) as it's designed holistically, something which is manifested as a particular weakness in the otherwise strong PC model due to the actual market dynamics, which are more fragmented between vendors on both axes.

But this isn't down to some fundamental difference in computing directions or issues with the PC industry; it's just the plain, age-old difference between consoles and PCs. Consoles shoot for the stars now because they have to last 4-5 years and they'll make back what they lose on hardware cost reductions and software licenses; PCs implement the technologies when it becomes feasible to do so in a cost-effective manner.

However, even the timescales at which consoles are exceeded by PCs' capabilities are lessening these days, even more so now that the same graphics vendors are being used.
 
Still, if PS3 has even a reasonable fraction of a TFLOP of compute performance, I don't see PCs coming close for a long time, as far as Intel and AMD CPUs go. When (if) PCs do come close to PS3's compute power, the PS4 will be near, or launching, with many TFLOPS of compute performance.

As for graphics, things on the PS3 side might be a bit closer to Xenon and PCs as far as hardwired features go. But even though PS3 is going to have NV5x-based graphics, that does not mean PS3 cannot totally eclipse Xenon and PC graphics for a long, long time. The 65nm process that Sony, Toshiba and IBM will have long before TSMC could provide the PS3 GPU with a large or even massive advantage: 65nm means a lot more transistors than 90nm, and higher clock speeds. PS3 could have a GPU that is 4-8 times the performance of the GeForce 6800 Ultra. Not to mention that the PS3 GPU will be fully utilized over its 5-7 year life, unlike any given PC GPU, which is never fully targeted because of the nature of PC game development.
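A back-of-the-envelope sketch of the density half of that claim (ideal scaling only; real shrinks deliver less, and none of these are official PS3 figures): a 90nm-to-65nm shrink ideally packs (90/65)^2 ≈ 1.9x the transistors into the same die area.

```python
# Back-of-the-envelope check on the 90nm -> 65nm claim. Ideal scaling only;
# real-world shrinks deliver less, and these are not official figures.
old_node, new_node = 90.0, 65.0

density_gain = (old_node / new_node) ** 2
print(f"ideal density gain: {density_gain:.2f}x")  # ~1.92x transistors per mm^2

# A ~222M-transistor 90nm part (GeForce 6800 Ultra class) rebuilt on 65nm
# in the same die area would have roughly this transistor budget:
transistors_90nm = 222e6
print(f"same-area 65nm budget: ~{transistors_90nm * density_gain / 1e6:.0f}M transistors")
```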

Imagine if you take the best that nVIDIA has to offer, take it out of a restricted platform like the PC, and put it into a streamlined, massively parallel, nearly bottleneck-free platform like PS3 or an SGI visualization system. Imagine if you combine the best that Nvidia has with the best aspects of Sony's Graphics Synthesizer chip and the GS family (including the GS I-32, the never-seen GS2, and the GS3 that was probably developed for PS3)... so you combine the best of Nvidia (rasterization, shaders, hardwired features, image quality, etc.) with the best of the Graphics Synthesizer (eDRAM, parallelism, high bandwidth), and we could be seeing the birth of something that PS2 almost was in 2000.

Of course, I could be wrong and PS3 is merely a powerful CPU plus a slightly customized PC-based GPU like the NV2A. We'll see! 8)
 
DaveBaumann said:
But this isn't down to some fundamental difference in computing directions or issues with the PC industry; it's just the plain, age-old difference between consoles and PCs.

Well, according to what you've stated, it can be a fundamental difference in computing directions. I do believe you previously stated:

  • Vince said:
    This argument can then be extended, quite easily, to the entire system: the components within it and the processing model, when you close the set and make it invariant -- which is what Diefendorff and Dubey did in their paper. This is the guiding principle that I believe in and, apparently, so does STI.

    Dave said:
    I don't take issue with the idea that this can equally be applied to hardware as well, but I just don't agree with some of the things you speak about.
But it's not an "issue" with the PC industry any more than the 4-5 year gap between console revisions is an "issue", as you stated. They are both sources of their respective strengths and their respective weaknesses. You guys really need to stop thinking I'm calling for the death of the PC in every post I make. When I do, I'll be sure to label the post as such.

I do question your last comment, though; let's wait 3 months and revisit it. Graphics vendors control the 3D-specific IP, which is why they're sought, but they don't have the bleeding-edge process technology and the ability to manufacture the parts, which is what will ultimately bound performance in today's world, which is increasingly computation- and bandwidth-bound.
 
Megadrive1988 said:
Still, if PS3 has even a reasonable fraction of a TFLOP of compute performance, I don't see PCs coming close for a long time, as far as Intel and AMD CPUs go. When (if) PCs do come close to PS3's compute power, the PS4 will be near, or launching, with many TFLOPS of compute performance.
Except a PC today already has ~100 GFLOPS (24 (16+6) pipes × 8 FLOPS at 500 MHz = 96 GFLOPS). That's not even counting the CPU or all the GPU ops (that assumes only one 4D dot per cycle).

If GPUs keep doubling every 6 months, they will pass a TFLOP in 2 years...

A PC today is a CPU + GPU. The FLOPS come from the GPU on a PC, not the CPU.
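Spelling out the arithmetic behind those figures, using the post's own assumptions (24 pipes, 8 FLOPS per pipe per clock, 500 MHz, and a 6-month doubling):

```python
# The arithmetic behind the ~100 GFLOPS figure and the TFLOP projection,
# using the post's own numbers; pipe count and FLOPS/pipe are its assumptions.
pipes = 24            # the post counts 24 pipes
flops_per_pipe = 8    # 8 FLOPS per pipe per clock (one 4-wide multiply-add)
clock_hz = 500e6      # 500 MHz

peak = pipes * flops_per_pipe * clock_hz
print(f"peak: {peak / 1e9:.0f} GFLOPS")  # 96 GFLOPS

# "If GPUs keep doubling every 6 months, they will pass a TFLOP in 2 years":
months, gflops = 0, peak / 1e9
while gflops < 1000:
    months += 6
    gflops *= 2
print(f"~{gflops:.0f} GFLOPS after {months} months")  # 1536 GFLOPS after 24 months
```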
 
I'm still not so sure how Sony will get a CPU that's so much more powerful than the competition without making it extremely expensive.
I'm hopeful and all, but in the end, unless this is a miracle machine, everything comes at a cost.
The BE might be the fastest processor at doing some things, with very large shortcomings somewhere else.
It's always like a balancing scale: put something on one side and the other will be inversely affected, and my view is that there is only so much cash they can pump into a BE.

Obviously Sony will target specific tasks they think will be needed and ignore tasks they feel will never be used in a PS3 game.
 
DeanoC said:
Megadrive1988 said:
Still, if PS3 has even a reasonable fraction of a TFLOP of compute performance, I don't see PCs coming close for a long time, as far as Intel and AMD CPUs go. When (if) PCs do come close to PS3's compute power, the PS4 will be near, or launching, with many TFLOPS of compute performance.
Except a PC today already has ~100 GFLOPS (24 (16+6) pipes × 8 FLOPS at 500 MHz = 96 GFLOPS). That's not even counting the CPU or all the GPU ops (that assumes only one 4D dot per cycle).

If GPUs keep doubling every 6 months, they will pass a TFLOP in 2 years...

A PC today is a CPU + GPU. The FLOPS come from the GPU on a PC, not the CPU.

How much does it cost?
 
The PC industry has given them the freedom to innovate, something that wouldn't be possible in the consumer industry alone

What kind of Pravda is this? Are you taking Vince's comments and, out of spite, inverting them to make the other side shine?

The PC sector, with its focus on backward compatibility with previous architectures and with the other components the GPU interacts with, and with a dominant API that evolves outside the complete control of the hardware vendors (MS seems to switch, every once in a while, which IHV is its favorite, and this is visible in the DirectX evolution: which IHV influenced it, and the others adapting and trying to beat the competition at their own game)...

Also, in PCs, high-end parts, which is where you expect performance and new features to appear, ship in very limited quantities, and developers, who cater to the lowest common denominator, will not take full advantage of what these architectures expose for a good while (in most cases).

In console chipsets, especially ones with a near-guarantee of high production volumes, like PlayStation 3, Xbox 2 and GCN 2/Revolution, manufacturers can afford more risks in engineering their hardware.

PlayStation 3 and Xbox 2 will in fact receive the bleeding edge (further customized) of what nVIDIA and ATI are cooking up in their own R&D labs, and all developers for each platform will (we can hope, it being a closed environment) have those features and performance characteristics documented and exposed, which is the first step toward taking advantage of them.
 
DeanoC said:
Megadrive1988 said:
Still, if PS3 has even a reasonable fraction of a TFLOP of compute performance, I don't see PCs coming close for a long time, as far as Intel and AMD CPUs go. When (if) PCs do come close to PS3's compute power, the PS4 will be near, or launching, with many TFLOPS of compute performance.
Except a PC today already has ~100 GFLOPS (24 (16+6) pipes × 8 FLOPS at 500 MHz = 96 GFLOPS). That's not even counting the CPU or all the GPU ops (that assumes only one 4D dot per cycle).

If GPUs keep doubling every 6 months, they will pass a TFLOP in 2 years...

A PC today is a CPU + GPU. The FLOPS come from the GPU on a PC, not the CPU.


Yeah, the GPUs on the PC side are rapidly increasing in their computational performance, but PC GPUs are not yet general-purpose enough for anything but graphics applications, whereas the CELL CPU in PS3 should be better for general-purpose usage than GPUs in PCs are today. The CPUs on the PC side are pretty stagnant; they're not growing in performance, with even less growth in the last few years than they showed in the 1990s, at least until we get dual-core CPUs from Intel and AMD on the desktop. But even then, dual-core CPUs will not rival high-end implementations of CELL like the one that will be used in PS3 and other high-end computing platforms, not counting the lower-end performance of CELL-based DVD players, televisions, PDAs, etc.

Now back to the PC graphics performance increases: yes, they are happening. But high-end PC GPUs are hardly being used, because ~90% or more of all PCs in the USA and the world are using low-end to midrange GPUs released over the last 5-6 years. GeForce2 MX, GeForce4 MX or GeForceFX 5200, anyone? Very few people will have the newest highest-end GPUs from Nvidia or ATI, whereas everyone who has a PS3 (or Xenon or Revolution) will have a modern high-performance 3D graphics chip, and in the case of PS3, a very, very high-performance CPU. The PS3 will raise the minimum bar - I mean the lowest common denominator - of 3D graphics in homes, like PS2 did, but probably to a somewhat greater extent, especially because of that CPU.

In this coming cycle, process technology and the massive amount of R&D behind PS3 are probably going to show us a more pronounced difference between PC and console.
 
Megadrive1988 said:
This is probably true. The GPUs on the PC side are rapidly increasing in their computational performance, but this is not general-purpose enough for anything but graphics applications, and the CPUs on the PC side are pretty stagnant, at least until we get dual-core CPUs from Intel and AMD on the desktop. But even then, dual-core CPUs will not rival high-end implementations of CELL like the one that will be used in PS3 and other high-end computing platforms, not counting the lower-end performance of CELL-based DVD players, televisions, PDAs, etc. Now back to the PC graphics performance increases: yes, they are happening. But high-end PC GPUs are hardly being used, because 90-98% of all PCs in the USA/the world are using low-end to midrange GPUs released over the last 5-6 years. Very few people will have the newest highest-end GPUs from Nvidia or ATI, whereas everyone who has a PlayStation 3 (or Xenon or Revolution) will have a modern high-performance 3D graphics chip. The PS3 will raise the minimum bar - I mean the lowest common denominator - of 3D in homes, like PS2 did, but probably to a somewhat greater extent.

Yes, exactly - just like the PC business: a range of FLOPS targets based on what it's used for.

Of course, 3D gaming needs lots of FLOPS, whereas watching a DVD doesn't. Same for the PC, same for Cell.

What exactly are we arguing here? Most PCs aren't used for games; those that are, are damn powerful. Who really plays PC games with less than a GFFX today?

When hasn't PC gaming pushed the envelope? (As anybody who has Doom 3 or HL2 knows, you need a good PC.)
Of course the luxury of a console refresh is great, as it offers a subsidized jump in tech, but after a little while the PC market equalises and then wins until the next console appears.

Rinse and repeat. That's the point: Cell/PS3 is 'just' another console, no different from PS2 and PS1 before it. Better than most PCs when released.

That's great as a developer and a consumer, just not very revolutionary.

I think we've been round the houses with this discussion; best we agree to disagree and see if the PC gaming business is here in 5 years, or if we all have PS3 gaming systems. :)
 