New interview about PS3 CM:Dirt

You misunderstand.

Extra work as in the devs have to tell their game to look in the other memory pool for the data, rather than just having the one area.

I wasn't really commenting on anything else.
 
But, hey, aren't PCs using two memory pools? No one here is talking about PCs being severely bottlenecked.

That's true. But, on PC, there are usually much larger pools. Even 512 MB total is quite a restriction. You can see this in Gears, IMO, with all the texture LOD levels to keep RAM usage down. I think it's been said that either the CPU or GPU in PS3 can't really use the other pool, and so the machine doesn't really have as large a RAM resource as 360. Whether there's some goofy workarounds for this, who knows. But even if there is, it's still at best 512 MB usable RAM.
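To put rough numbers on why capping texture LOD saves so much RAM (a back-of-envelope sketch; the texture sizes are purely illustrative, not anything from Gears):

Code:
/* Dropping the top mip level quarters a texture's base size.
   A full mip chain adds about 1/3 on top of the base level. */
#include <stdio.h>

int main(void)
{
    double full   = 2048.0 * 2048.0 * 4.0; /* 2048^2 RGBA8 base, bytes */
    double capped = 1024.0 * 1024.0 * 4.0; /* top mip level dropped    */
    printf("Full chain:   %.1f MB\n", full   * (4.0 / 3.0) / (1024 * 1024));
    printf("Capped chain: %.1f MB\n", capped * (4.0 / 3.0) / (1024 * 1024));
    return 0; /* prints 21.3 MB vs 5.3 MB - a 4x saving per texture */
}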

The PC GPUs also quite frequently enjoy much more memory bandwidth while the big main system RAM pool is similar in speed (on the newer DDR2/3 boards) to what the PS3 has in both pools. Bandwidth and fill rate are what keep 1080p kinda out of reach for both machines.

If you look at how much bandwidth and fill rate both machines have, RAM amount, and the fact that they are being asked to render 3x the pixels of the previous generation of consoles (with better assets as well), you can probably infer that these newer consoles aren't going to do much more than they are right now, IMO. We've seen a few AAA titles on 360 and one can tell they really push the machine.
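The "3x the pixels" figure is simple arithmetic, for what it's worth (assuming 640x480 as the typical last-gen render target):

Code:
/* 720p vs a typical SDTV render target. */
#include <stdio.h>

int main(void)
{
    int last_gen = 640 * 480;   /* 307,200 pixels */
    int this_gen = 1280 * 720;  /* 921,600 pixels */
    printf("%.1fx the pixels\n", (double)this_gen / last_gen); /* 3.0x */
    return 0;
}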
 
The PC GPUs also quite frequently enjoy much more memory bandwidth while the big main system RAM pool is similar in speed (on the newer DDR2/3 boards) to what the PS3 has in both pools. Bandwidth and fill rate are what keep 1080p kinda out of reach for both machines.

DDR3 can only reach PS3 XDR speeds if you have dual-channel DDR3, at the highest speed available, on the newest motherboards.

Even then, the PCI-E channels on a PC aren't as fast as the PS3's internal bandwidth between RSX and XDR.

Not to mention the EIB ring speed for multicore scaling.

I suppose it wouldn't be possible to get 256-bit levels of fill performance out of RSX if you split the frame buffer between XDR and GDDR3, but otherwise the PS3 looks very good in its bandwidth allocations.. doesn't it?
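For reference, the usual launch-spec figures (theoretical peaks as commonly quoted, so treat them as assumptions, not measured numbers):

Code:
/* Commonly quoted PS3 peak bandwidths vs a PC's PCI-E link.
   All figures are theoretical launch-spec peaks, GB/s. */
#include <stdio.h>

int main(void)
{
    double xdr    = 25.6;        /* Cell <-> XDR main memory           */
    double gddr3  = 22.4;        /* RSX <-> GDDR3 video memory         */
    double flexio = 20.0 + 15.0; /* Cell <-> RSX link, both directions */
    double pcie16 = 4.0;         /* PCI-E 1.x x16, per direction       */

    printf("Combined local pools: %.1f GB/s\n", xdr + gddr3);
    printf("Cell<->RSX (FlexIO):  %.1f GB/s aggregate\n", flexio);
    printf("PC PCI-E x16:         %.1f GB/s per direction\n", pcie16);
    return 0;
}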
 
DDR3 can only reach PS3 XDR speeds if you have dual-channel DDR3, at the highest speed available, on the newest motherboards.

True, but even the fastest CPUs can't come close to utilising all that bandwidth, unlike Cell, so I don't see what the point of it is anyway. By far the fastest Intel CPUs today can only consume about 10.5GB/sec of bandwidth, so that's all you should need in system memory, I would have thought.
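That ~10.5GB/sec figure lines up with the FSB math (back-of-envelope, assuming a 1333MHz quad-pumped FSB):

Code:
/* Why ~10.5GB/s is the ceiling for an FSB-based Intel CPU.
   Figures are theoretical peaks. */
#include <stdio.h>

int main(void)
{
    double fsb_mts = 1333e6; /* 1333MHz FSB, transfers per second */
    double width_b = 8.0;    /* 64-bit FSB data path, bytes       */
    printf("Peak FSB bandwidth: %.1f GB/s\n", fsb_mts * width_b / 1e9);
    /* prints ~10.7 GB/s - the CPU can't consume more than this
       from system memory no matter how fast the DRAM is */
    return 0;
}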

Even then, the PCI-E channels on a PC aren't as fast as the PS3's internal bandwidth between RSX and XDR.

But that doesn't matter in the slightest for this purpose, because a good PC already has as much memory as, or more than, the PS3's GDDR3 and XDR combined, connected directly to the GPU by a much faster bus.

Certainly, if the PC were to use system memory for graphics data it would be slower than the PS3, but it would only have to resort to that when it's already using more graphics data than would be possible on the PS3. Thus it only gets slower once it gets into a realm where the PS3 couldn't operate at all. That's obviously assuming a 512MB or greater GPU, and discounting some new abilities of DX10 which I think do make use of system RAM more often (but presumably not to a detrimental effect).

Not to mention the EIB ring speed for multicore scaling.

That's a bit out of context from the rest of your post! It's also just a single aspect of CPU performance, which on its own is a little meaningless. Interesting point you raise though; I wonder how the EIB in Cell compares to whatever, say, a Core 2 Duo uses to communicate between cores?
 
Extra work as in the devs have to tell their game to look in the other memory pool for the data, rather than just having the one area.
When you're designing your code around two mem pools you determine exactly how you're going to use them.. If you're pulling data from one or the other there is no extra work involved, because you already know beforehand where your data is (you put it there..)
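To illustrate what I mean (a minimal sketch with made-up pool names; this isn't any real SDK API, just the idea of budgeting both pools up front):

Code:
/* Two-pool budgeting sketch: a simple bump allocator per pool.
   Pool names and sizes are illustrative, not PS3 SDK calls. */
#include <stdio.h>
#include <stddef.h>

#define POOL_SIZE (1024 * 1024) /* 1 MB per pool for the demo */

typedef struct {
    unsigned char buf[POOL_SIZE];
    size_t used;
} pool_t;

static pool_t xdr_pool;   /* stands in for CPU-side XDR   */
static pool_t gddr3_pool; /* stands in for GPU-side GDDR3 */

static void *pool_alloc(pool_t *p, size_t size)
{
    if (p->used + size > POOL_SIZE) return NULL; /* pool exhausted */
    void *ptr = p->buf + p->used;
    p->used += size;
    return ptr;
}

int main(void)
{
    /* The pool is chosen when the asset is created, so there is no
       per-access "which memory?" lookup later: the code put the
       data there, so it already knows where it lives. */
    void *anim_data = pool_alloc(&xdr_pool,   64 * 1024);  /* CPU reads this  */
    void *texture   = pool_alloc(&gddr3_pool, 256 * 1024); /* GPU samples this */

    printf("XDR used:   %zu bytes\n", xdr_pool.used);
    printf("GDDR3 used: %zu bytes\n", gddr3_pool.used);
    (void)anim_data; (void)texture;
    return 0;
}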

That's true. But, on PC, there are usually much larger pools. Even 512 MB total is quite a restriction. You can see this in Gears, IMO, with all the texture LOD levels to keep RAM usage down.
No matter how much memory a platform would give us, we'd always want more.. therefore any restrictions are self-imposed..

I think it's been said that either the CPU or GPU in PS3 can't really use the other pool, and so the machine doesn't really have as large a RAM resource as 360.
Whoever said that simply didn't have a clue what he was talking about..
Whether there's some goofy workarounds for this, who knows. But even if there is, it's still at best 512 MB usable RAM.
Workarounds are perfectly viable, and as a result mandatory if you're expecting to get any kind of good utilisation out of the system..

The PC GPUs also quite frequently enjoy much more memory bandwidth while the big main system RAM pool is similar in speed (on the newer DDR2/3 boards) to what the PS3 has in both pools.
*Useable* PC memory bandwidth is only ever as fast as your interconnects, therefore you're always limited by PCI-E/bus speeds..

Bandwidth and fill rate are what keep 1080p kinda out of reach for both machines.
I wouldn't say it's out of reach, depends on what you as a developer are aiming for.. (see Lair)

If you look at how much bandwidth and fill rate both machines have, RAM amount, and the fact that they are being asked to render 3x the pixels of the previous generation of consoles (with better assets as well), you can probably infer that these newer consoles aren't going to do much more than they are right now, IMO. We've seen a few AAA titles on 360 and one can tell they really push the machine.
How can you tell by looking at a few screenshots when you clearly don't know what's going on under the hood..?
 
*Useable* PC memory bandwidth is only ever as fast as your interconnects, therefore you're always limited by PCI-E/bus speeds..

Sorry for being pedantic, as I'm sure you already know this, but that's not quite true.

Useable graphics memory bandwidth for the GPU is limited by the GPU's memory bus, and useable system memory bandwidth for the CPU is limited by the CPU's FSB and/or memory controller.

PCI-E bandwidth only serves as a limitation when the GPU tries to use system memory or the CPU tries to use graphics memory (which I don't think is even possible in a PC is it?).

To be honest, I think the speed of DDR3 at the moment is pretty ridiculous considering what CPUs are limited to using. I REALLY hope that Nehalem can address at least dual-channel DDR3-1600MHz with its IMC. That would at least bring it on par with Cell in terms of memory bandwidth.
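Back-of-envelope on that (theoretical peaks, using the commonly quoted figures):

Code:
/* Dual-channel DDR3-1600 vs the PS3's XDR, theoretical peaks. */
#include <stdio.h>

int main(void)
{
    /* 1600 MT/s x 8 bytes per channel x 2 channels */
    double ddr3 = 1600e6 * 8.0 * 2.0 / 1e9;
    /* XDR: 3200 MT/s effective on a 64-bit (8-byte) interface */
    double xdr  = 3200e6 * 8.0 / 1e9;
    printf("Dual-channel DDR3-1600: %.1f GB/s\n", ddr3); /* 25.6 */
    printf("PS3 XDR:                %.1f GB/s\n", xdr);  /* 25.6 */
    return 0;
}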
 
Sorry for being pedantic, as I'm sure you already know this, but that's not quite true.

Useable graphics memory bandwidth for the GPU is limited by the GPU's memory bus, and useable system memory bandwidth for the CPU is limited by the CPU's FSB and/or memory controller.
Ah, sorry! I don't know where my mind was when I said that!?! :p

PCI-E bandwidth only serves as a limitation when the GPU tries to use system memory or the CPU tries to use graphics memory (which I don't think is even possible in a PC is it?).
It is as far as I'm aware.. Don't ask me how you do it though.. Never really looked too far into it..

To be honest, I think the speed of DDR3 at the moment is pretty ridiculous considering what CPUs are limited to using. I REALLY hope that Nehalem can address at least dual-channel DDR3-1600MHz with its IMC. That would at least bring it on par with Cell in terms of memory bandwidth.
True, however most PC games are generally using much more GPU resources anyway, and thus CPU bandwidth doesn't need to improve as fast.. Still, with the advent of multi-core CPUs, it HAS to, otherwise you're gonna end up starving your cores of work for the vast majority of run-time..
 
How can you tell by looking at a few screenshots when you clearly don't know what's going on under the hood..?

Uhh, because I think it's safe to assume that in almost 2 years we've seen what the 360 can do (and can't do). The fact that the PS3 hardware is not much more exciting (less so?) than 360 tells me that things aren't going to really get all that much better until new hardware arrives. I don't need to know how things work under the hood. I can just look back on every other console's history and take comfort in knowing that several AAA games have been released for the current generation now.

Maybe if one of the two consoles had significantly superior GPU tech or more RAM, but neither does.

I'm not just looking at screenshots. I've played every significant 360 title (and own a few). From here on out it'll be incremental improvements. We've seen the "next-gen" newfangledness already. Whatever that is.

As for Lair, well, F5 hasn't really ever made a fun game. They make tech demos with shitty arcade controls and no depth whatsoever. I own them all because I'm a graphics whore, or something of the sort. Their best game was their port of Indiana Jones, or according to others, their SNES games. A dragony version of Rogue Squadron is highly non-exciting.
 
I don't need to know how things work under the hood. I can just look back on every other console's history and take comfort in knowing that several AAA games have been released for the current generation now.
So you look back at PS2's software library in its first 9 months, and from that knew exactly what it'd be managing 5 years later?
 
So you look back at PS2's software library in its first 9 months, and from that knew exactly what it'd be managing 5 years later?

Yeah, I think Swaaye is referring to the 360, as it has GPU power comparable to the PS3 but with added extras such as the embedded 10MB of RAM and a tessellation unit.
So I think he is saying that we won't see too much better out of the 360, but with the PS3 you have more CPU headroom; with more and more optimizations for the SPUs becoming available later in its life cycle, we will see better performance = better games.
I'm not saying the 360 won't get any optimizations, but with the PS3 it seems you will get more out of Cell than you will out of the 360's Xenon tri-core CPU through optimization...
 
Yeah, I think Swaaye is referring to the 360, as it has GPU power comparable to the PS3 but with added extras such as the embedded 10MB of RAM and a tessellation unit.
So I think he is saying that we won't see too much better out of the 360...
Seems to me what he was saying is the XB360 is pretty much tapped out, and the PS3 isn't much stronger hardware, so things won't get much better if at all. And that opinion isn't based on any understanding of the machines' hardware! The same comparison last gen would be to look at PS2's first-year library, look at what GC was achieving, and draw conclusions for all systems by the cycle's end. That totally ignores the hardware and developer aspect though: PS2 was far more capable than the first year's showings. You don't know how much more capable XB360 and PS3 are without understanding the hardware and following developer comments on how they're using and learning about the systems, which Swaaye seems to consider unnecessary.
 
So you look back at PS2's software library in its first 9 months, and from that knew exactly what it'd be managing 5 years later?

Yes, but first-party developers have had Cell + Nvidia GPU dev kits for 3 years now. These 9 months shouldn't be remotely comparable to the PS2's first 9 months, seeing how most of those games were rushed, with little time with the hardware. PS3 developers have had Cell + 6800 Ultra since 2004, a G70 GPU since 2005, and final hardware since early 2006 (minus Blu-ray).

PS2 launch games were rushed, by developers with rather little time with final hardware, compared to 2-3 years for first-party developers.

While games will improve on all platforms, I think it's rather far-fetched to believe in the same amazing differences in games/graphics that the PS2 had from launch to finish.
 
That's all true (sans the idea that because Cell has been out 3 years, developers have got anywhere near mastering it), though I wasn't likening PS3's potential progress to PS2's (or any other console). Only pointing out that there's room for improvement first and foremost, but mostly that it's ridiculous to try to guess at the future potential of these machines without caring to know how things work under the hood!
 
I'm not just looking at screenshots. I've played every significant 360 title (and own a few). From here on out it'll be incremental improvements. We've seen the "next-gen" newfangledness already. Whatever that is.

Too early to tell. At best, it's your opinion of Xbox 360 games and PS3 trailers/demos. Most of the PS3 games are not even out yet. The differences I noticed so far are:

* LBP's "Game 3.0" or community gameplay. Echochrome is supposed to have user-generated content too. Let's not forget "Spore" (PC and Wii only?).

* Judging from trailers, Heavenly Sword seems cinematic (dramatic lighting, convincing skin shaders, life-like motions and emotional "acting" by the cast). To me it's more than pretty pictures and smooth animation.

* Eye of Judgement. "Augmented Reality" looks poised for mass adoption starting this year. This is a big area (I know of fascinating projects outside console gaming and they are doable on PS3).

There are also others that I simply do not know enough about (e.g., KZ2's weather effects, Afrika's crowd behaviour, Heavy Rain's "acting" AI).
 
While games will improve on all platforms, i think its rather far fetched to believe in the same amazing differences in games\graphics that the PS2 had from launch to finish.

Having a lot of time on hardware does not necessarily equate to understanding it inside and out, and even if it did, that does not translate to being able to utilise every cycle and feature. Shifting goalposts from both the hardware and the game designers mean spending a lot of time just getting things running at all, and working around bugs or missing features.

The C64 was a machine you could pretty much understand the hardware of just by reading the manual that came with it, and that wouldn't take more than a couple of hours to read through and at most days or weeks to comprehend at a fairly deep level. And yet 25 years later people are still squeezing new tricks out of it.

I think that effect maps just fine to modern hardware - if not more so, because the complexities are far more difficult to really understand and explore, getting access to kit is much harder (you need expensive specialist hardware, generally speaking, to properly code for a modern console - the C64 was programmable from the moment you took it out of the box and plugged it in), and the OSs and SDKs evolve over time to give us more access to features and performance.

It would be great if I could write the best possible code for a system with only 2-3 years exposure to it (and exposure to unfinished versions at that) but the real world imposes constraints that mean we'll probably still be learning things about 360 and PS3 when PS6 and XBox Next-Next-Gen II Alpha (Gates memorial edition) are out.
 
I also don't think it should be forgotten that PS3 has a GPU that many devs are familiar with from the PC. It's an architecture that's been around since 2004 or so. I don't think they'll have trouble tapping it out. We also know a lot about G7x and its limitations.

360 might be different in this respect. It's a lot different than R5x0 or R4x0. I think they've pushed it pretty hard in games like GoW, Forza 2, and DIRT. You can pretty much see various limits it has in those games, IMO. Yes I know it's risky to try compare developer efforts, but still. Forza 2 sacrifices some image quality to get 60 fps while DIRT adds some visual quality but is 30 fps or less at times. GoW's most obvious issues are lack of MSAA and notably heavy mip mapping / texture LOD to contain memory usage. That the PC version will have added texture detail tells us that they were struggling with 512 MB RAM.
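The 30 vs 60 fps tradeoff is really just the frame-time and fill budget (a trivial calc, but it frames the Forza 2 / DIRT comparison):

Code:
/* Frame-time budgets: halving the frame rate doubles the time
   (and effectively the fill/bandwidth) available per frame. */
#include <stdio.h>

int main(void)
{
    double px_720p = 1280.0 * 720.0;
    printf("60 fps: %.1f ms/frame\n", 1000.0 / 60.0);
    printf("30 fps: %.1f ms/frame\n", 1000.0 / 30.0);
    printf("720p @ 60: %.0f Mpix/s minimum fill\n", px_720p * 60 / 1e6);
    printf("720p @ 30: %.0f Mpix/s minimum fill\n", px_720p * 30 / 1e6);
    return 0;
}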

PS2 GS is a rather weird and quirky GPU. I think initial PS2 games showed just how rough and unknown it was for a while.
 
I also don't think it should be forgotten that PS3 has a GPU that many devs are familiar with from the PC. It's an architecture that's been around since 2004 or so. I don't think they'll have trouble tapping it out. We also know a lot about G7x and its limitations.

Is that really the case? I always thought they accessed it through an API, not directly down to the chip.. Of course they know all about the limitations and so on, but is RSX common knowledge for a game developer?
 
I think the most exotic architecture gets the most improvement over time...

PS2 had the vector units and was hard to program for. Then we got God of War 1/2 (5 years later :) ), matching/surpassing the visual quality of the original Xbox, which came out 1-2 years after PS2...

Now in next-gen hardware, PS3 has Cell and the 360 has Xenos, both exotic architectures, but I think we will see the most gains out of the SPUs over time.
 
I think the most exotic architecture gets the most improvement over time...

It's more a matter of just having the biggest userbase: the platform with the biggest userbase gets the most dedicated developers, the most talent, the biggest budgets, and therefore the biggest improvements.
 