ATI - PS3 is Unrefined

Brimstone said:
I just have a hard time believing that the RSX is the G70 with FlexIO. Months after the nVidia contract came to light, it was revealed Sony was paying for 100 Transmeta engineers to put Longrun2 technology into CELL. After all the time and money invested in the Playstation 3 platform, I hardly think Kutaragi balked at the chance to get a nVidia designed/targeted GPU for the 2006 timeframe.

I highly doubt that it takes 100 engineers to put in body-effect biasing. I would be interested in any pointers you have on this.

Aaron Spink
speaking for myself inc.
 
Will the NDAs lift when the product ships?

Or will they keep the development hardware specs and other development details confidential, so that future console development can't be deduced from this history of the PS3 development process?
 
aaronspink said:
I highly doubt that it takes 100 engineers to put in body-effect biasing. I would be interested in any pointers you have on this.

Aaron Spink
speaking for myself inc.

Sony and Sony Computer Entertainment announced Thursday that about 100 Transmeta engineers will work with the companies on integrating Transmeta's LongRun2 power-saving technology into future products. Sony will pay "market rates" for the services of those engineers, who will help Sony produce derivatives of its Cell processor, Swift said. Cell is a multicore processor designed for Sony's upcoming PlayStation 3 gaming console, and development partners IBM and Toshiba are also expected to seek out other applications for the chip.


http://www.infoworld.com/article/05/03/31/HNtransmetaservices_1.html


In terms of supporting developers in their use of the Cell processor, Sony is forming an alliance with chipmaker Transmeta Corporation, a company renowned for its software emulation technology and its x86-compatible, software-based microprocessors. Transmeta will be offering an SPE optimizer and software that will let developers effectively program for the Cell processor and its seven SPEs. The tools will allow SPE simulation on PCs and will also let programmers debug and tune their programs with runtime info. Transmeta's tools will be shipped to developers in Q4 2005.

http://www.gamespot.com/ps3/action/lair/news.html?sid=6129611
 
Since my GDC lecture was referenced earlier...

XYZ RGB has a portable scanner now that can be taken on location, delivering the same scanning resolution as their in-house hardware (it's still not available for sale, though). It takes about 4 seconds, and movement isn't an issue. Other solutions (like the Polhemus scanner) offer a tracker that adjusts for any head movement.
I don't believe that the XYZ RGB resolution is required to convincingly recreate a human face, though. I've done some great likenesses with scanned heads that were decimated to 1mm or less. Modeling the volume of the face is rather forgiving, and the resurfaced mesh can have slight deviations from the real person as long as the rest is spot on. A good skin shader with matching color data is 90% of the ticket. An XYZ RGB scan will give you all the skin pore information as a bonus, but that part is rather easy to fake by the texture artist. You always end up rebuilding the eyes to some degree no matter what scan data you have as source.

I am talking in the context of next-generation games, of course, not super high-end film work. The reason the Spiderman humans looked so good was mostly because of the lighting model/extraction that SPI used, and similar things can be said for the Matrix humans (which were based on XYZ RGB scan data). Highly detailed displacement data for the skin becomes important there. Given the choice, I would pick XYZ RGB data any day, but I don't think that it's necessary for most of the game (or even film) work that you'll see.

Just some general information, none of which is meant to comment on the specifics of PS3 hardware that started the thread (why don't we all just wait for the first games? :)). It's just as applicable to 360 and high-end PC development.
 
Metal said:
I'm not going to take any comment from ATI about the competition's product seriously, just as I don't take anything said vice versa seriously either. However, the statements he makes about the GPU in the Xbox 360 specifically are exactly correct. It's not a GPU they would put in a PC at this time, it's not a GPU that was designed to just be as powerful as hell regardless of costs, and it's just a different beast that's going to take some time for developers to truly understand how to make games for it. Can Sony claim that of their final GPU? All signs point to no that I've seen, but I'm willing to wait until I learn more about it before passing judgement on it. My comment has nothing to do with power. It's about the relationship between costs, being a console-specific design, and the development difficulty curve with the GPU.


Very well said.
 
therealskywolf said:
Well, for him to make comments like this, he must know something about the RSX.

Well as discussed earlier, from some of his comments he doesn't seem to know much.

Dave Baumann said:
Well, personally I would have felt that a unified design would work great with Cell given the likely capabilities it has for pushing poly's, probably far more so than the XCPU. That scenario would work well because developers could choose where to devote the geometry processing (Cell, part Cell part graphics, or all graphics) without necessarily wasting any of the processing resources on the graphics chip.

2 of the 3 scenarios you outline are still possible, without "wastage". The vast majority of a G70's power is in its pixel processing as is.

Metal said:
it's not a GPU that was designed to just be as powerful as hell regardless of costs, and it's just a different beast that's going to take some time for developers to truly understand how to make games for it. Can Sony claim that of their final GPU? All signs point to no that I've seen, but I'm willing to wait until I learn more about it before passing judgement on it.

There's such a thing as usage and better usage of any chip. On Xenos, the things devs would have to be explicit about are the eDRAM and the tessellator in particular. Things like unified shaders are supposed to "just work" without the dev doing anything in particular. I think for both chips, given their programmability, the biggest areas of improvement are likely to come from the software side - better, faster, smarter algorithms.
 
Langsuyar said:
Since my GDC lecture was referenced earlier...

XYZ RGB has a portable scanner now that can be taken on location, delivering the same scanning resolution as their in-house hardware (it's still not available for sale, though). It takes about 4 seconds, and movement isn't an issue. Other solutions (like the Polhemus scanner) offer a tracker that adjusts for any head movement.
I don't believe that the XYZ RGB resolution is required to convincingly recreate a human face, though. I've done some great likenesses with scanned heads that were decimated to 1mm or less. Modeling the volume of the face is rather forgiving, and the resurfaced mesh can have slight deviations from the real person as long as the rest is spot on. A good skin shader with matching color data is 90% of the ticket. An XYZ RGB scan will give you all the skin pore information as a bonus, but that part is rather easy to fake by the texture artist. You always end up rebuilding the eyes to some degree no matter what scan data you have as source.

I am talking in the context of next-generation games, of course, not super high-end film work. The reason the Spiderman humans looked so good was mostly because of the lighting model/extraction that SPI used, and similar things can be said for the Matrix humans (which were based on XYZ RGB scan data). Highly detailed displacement data for the skin becomes important there. Given the choice, I would pick XYZ RGB data any day, but I don't think that it's necessary for most of the game (or even film) work that you'll see.

Just some general information, none of which is meant to comment on the specifics of PS3 hardware that started the thread (why don't we all just wait for the first games? :)). It's just as applicable to 360 and high-end PC development.

Welcome to the board !
There are also quite a few threads where Lair is a topic. I remember a recent one where people talked about the skin deformation of the dragons, etc.
I know you are probably busy, but maybe you could step in and make some comments in there, if you are allowed to, of course.
 
Dave Baumann said:
Curiously, I would suggest that the “meat” of his argument, given that it was the largest part of that copied text, was him saying that fundamentally RSX is not a custom design, unlike Xenos, and hence not as tailored to the specific needs of a console as theirs is. In his reasoning for saying that, he is also making several other assertions, such as that this was a last-minute switch and that RSX isn’t fundamentally different from the PC design, further suggesting that it isn’t fundamentally different from a GTX and what that encompasses.

If I were one of the ******s in this thread these are actually the issues that I would want addressed in order to gain a little more understanding about RSX, rather than joining in on the mob mentality.

As already pointed out however, this part isn't likely to reach a satisfactory conclusion, because people here either don't know the answers, or would be hacked to pieces by teams of ninjas for revealing them.

However the argument itself is flawed.

Firstly he's talking about the design of a chip that he either doesn't know anything about, has flawed information about, or is willing to lie about (one of these being the case based on what little information he does present being false).

Secondly he's drawing a conclusion (X360 will prove to be much better than PS3) from a slightly strange argument (XGPU is more "bizarre" and "consoleish" than the RSX) which in turn was based on entirely inaccurate assumptions.

Even assuming he was correct - RSX is some kind of PC GPU with minimal changes - then why does that mean it can't hold its own? It's still going to be a modern GPU with all the features developers will expect of it. Anything else would be icing on the cake. ATI certainly appear to have at least a little icing, but I'm not convinced about the rest of their cake being up to snuff - but that's OK because I'm not going to post sweeping statements about it or bizarro-world claims to support them.

You might as well suggest that "in the long run", GS will prove better than the nVidia part in XBox.

The title of the thread is "PS3 is unrefined". I think that's a pretty subjective point anyway. I could describe the PS2 as "unrefined" because frankly it was a balls-out number crunching design with none of the comfort features programmers wanted. I could also say it was very refined, because it was a largely custom design and bore no resemblance to a PC. But since when were PCs "unrefined" anyhow? Look at most any of the other threads about console design and you'll see people arguing about how alien the machine is (be it PS2 vs XBox or Cell Vs XCPU, or whatever - pick anything) being a *bad* thing. Suddenly the opposition has a single part that's slightly more custom than Sony and it's the second coming...

So not only does the argument have no teeth, it's entirely at odds with earlier arguments from the same camp. People are changing the rules to suit the conclusion they want to draw.

I honestly didn't think this thread would make it past the first page... I think maybe people are so desperate either for news, or a good argument, that they'll jump on anything no matter how ridiculous. And this is certainly ridiculous.
 
Titanio said:
2 of the 3 scenarios you outline are still possible, without "wastage". The vast majority of a G70's power is in its pixel processing as is.
Only if the load exactly matches the hardware capabilities, which is very unlikely. Also, the PS units can stall if there is insufficient VS load at any point in time.
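
To put a toy number on that point (the unit counts and workload mixes below are purely illustrative, not actual RSX or Xenos figures): with a fixed vertex/pixel split, whichever side is over-subscribed gates the frame, while a unified pool can in principle always be kept busy.

Code:
# Toy model of a fixed vertex/pixel shader split vs. a unified pool.
# All numbers are illustrative - none are real RSX or Xenos figures.

def fixed_split_utilisation(vs_units, ps_units, vertex_share):
    """Fraction of peak throughput achieved when the workload mix
    doesn't match the hardware split: the over-subscribed side
    becomes the bottleneck and the other side idles."""
    pixel_share = 1.0 - vertex_share
    vs_time = vertex_share / vs_units      # time the VS side needs
    ps_time = pixel_share / ps_units       # time the PS side needs
    ideal_time = 1.0 / (vs_units + ps_units)
    return ideal_time / max(vs_time, ps_time)

for vertex_share in (0.05, 0.25, 0.50):
    util = fixed_split_utilisation(vs_units=8, ps_units=24, vertex_share=vertex_share)
    print(f"vertex share {vertex_share:.0%}: fixed-split utilisation ~{util:.0%}")
# A unified design would sit near 100% for any mix, ignoring other overheads.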
 
Firstly he's talking about the design of a chip that he either doesn't know anything about, has flawed information about, or is willing to lie about (one of these being the case based on what little information he does present being false).
Even if the information is flawed, it doesn’t mean there aren’t elements of truth in there – for instance I would possibly argue that there were three internal contenders before NVIDIA even came into the fray, as I can see only two (and surely one of them wasn’t a contender for more time than it took someone to sketch out an image on a patent?), but from what I’ve heard and from what I’ve been told about it from quarters within NVIDIA I actually do believe the decision was fairly late in the day and that the design isn’t significantly different from what they already have “on the shelf”, so to speak.

Even assuming he was correct - RSX is some kind of PC GPU with minimal changes - then why does that mean it can't hold its own? It's still going to be a modern GPU with all the features developers will expect of it. Anything else would be icing on the cake. ATI certainly appear to have at least a little icing, but I'm not convinced about the rest of their cake being up to snuff - but that's OK because I'm not going to post sweeping statements about it or bizarro-world claims to support them.

You might as well suggest that "in the long run", GS will prove better than the nVidia part in XBox.
Well, hardly - we are talking about parts whose shader metrics may well turn out to be very similar at a hardware level; it's just that how those are accessed and harnessed is quite different. I think there is a good chance that other metrics may also end up similar as well.

He’s also addressing other points there as well, which I think point towards this. His earlier comments about “performance cliffs” are almost certainly addressing the register limitations that NVIDIA have suffered from in the past – although NV40 and G70 were far better in this regard than NV30, it can still prove to be a big barrier in some cases (look at G70’s GROMACS performance). Elements such as these are not really of much of an issue in the PC space since the longevity of the part is limited – realistically G70 on the PC will last no longer than 12-24 months (if that) and such issues are not going to be of concern because largely the part is running, and being benchmarked on, effectively legacy applications. PC design is very much “accelerate now, give an insight to the future” because of the relatively transient nature of the design of a specific part.
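
For anyone unfamiliar with the register-pressure issue being referred to, here is a rough sketch of why it produces a "cliff": the more temporary registers a shader needs, the fewer pixel threads can be kept in flight to hide memory latency, so performance can drop sharply once a shader crosses a threshold. The register-file size and thread cap below are invented for illustration, not G70 or RSX figures.

Code:
# Sketch of the "performance cliff" from shader register pressure.
# The register-file size and thread cap are invented, illustrative numbers.

REGISTER_FILE = 8192   # total temporary registers shared by in-flight threads
MAX_THREADS   = 256    # scheduler cap on threads per shader block

def threads_in_flight(regs_per_thread):
    """More registers per thread -> fewer threads available to hide latency."""
    return min(MAX_THREADS, REGISTER_FILE // regs_per_thread)

for regs in (16, 32, 48, 64, 128):
    print(f"{regs:3d} regs/thread -> {threads_in_flight(regs):3d} threads in flight")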

But since when were PCs "unrefined" anyhow? Look at most any of the other threads about console design and you'll see people arguing about how alien the machine is (be it PS2 vs XBox or Cell Vs XCPU, or whatever - pick anything) being a *bad* thing. Suddenly the opposition has a single part that's slightly more custom than Sony and it's the second coming...

So not only does the argument have no teeth, it's entirely at odds with earlier arguments from the same camp. People are changing the rules to suit the conclusion they want to draw.

You’re right, suggesting that there may be something akin to a PC graphics chip in PS3 this time last year would have met you with no end of vitriol from some quarters. Suggesting that a part specifically tailored to a console's graphics demands belonged there could have ended in pages and pages of argument before then.

I honestly didn't think this thread would make it past the first page... I think maybe people are so desperate either for news, or a good argument, that they'll jump on anything no matter how ridiculous. And this is certainly ridiculous.
From an apparently ridiculous thread, I take note of the type of people who have come out to decry it.
 
Fafalada said:
Given how well a Voodoo1 and Alladin7 held their own against NV27.5 this console generation (and with less than half the memory to boot), that looks like a very irrelevant difference. :cool:
Without even talking about forward looking features in Xenos, dynamic branching and vertex texturing are SM3 features which RSX won't have the measure of, but are the baseline of Xenos.

Or maybe you're trying to suggest that graphics programmers can't do fantastic things with features that are beyond SM2 GPUs.

Jawed
 
Dave Baumann said:
You’re right, suggesting that there may be something akin to a PC graphics chip in PS3 this time last year would have met you with no end of vitriol from some quarters.

We knew nVidia and Sony were working on a GPU based on their then next-generation tech this time last year. The news was well received, generally.

I'm with Wibble, this thread is silly. It's the reason we had dedicated PR threads previously... perhaps it's time to bring them back.
 
Titanio said:
We knew nVidia and Sony were working on a GPU based on their then next-generation tech this time last year. The news was well received, generally.
Sorry, a little prior to the NVIDIA announcement then. :rolleyes:

I'm with Wibble, this thread is silly. It's the reason we had dedicated PR threads previously... perhaps it's time to bring them back.
It's as silly as those who want to make it silly try to make it. I'd say there has been quite a lot of information in this thread despite its apparent "silliness".
 
Jawed, perhaps the whole point of all the talk about Cell and RSX working in parallel is about overcoming any shader limitations that the GPU would have? This could actually make the PS3 even more capable than Xenos by itself, for example there was that possibility of deferred shading Faf has mentioned, with SPEs using data generated by the GPU...
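
For readers who haven't followed the earlier discussion, the idea being alluded to is roughly this: the GPU writes out intermediate per-pixel data (a "G-buffer" of albedo, normals, depth), and a later lighting pass runs on the CPU side. The sketch below is only a conceptual stand-in, with NumPy doing the work plain SPE code would do and with made-up inputs; it is not anyone's actual PS3 technique.

Code:
# Conceptual sketch of deferred shading split across GPU and CPU:
# the GPU fills a G-buffer, and a CPU-side pass (NumPy here, standing
# in for SPE code) applies the lighting. Inputs are random placeholders.
import numpy as np

H, W = 720, 1280

albedo  = np.random.rand(H, W, 3).astype(np.float32)           # from the GPU
normals = (np.random.rand(H, W, 3).astype(np.float32) * 2.0) - 1.0
normals /= np.linalg.norm(normals, axis=2, keepdims=True)       # unit normals

light_dir = np.array([0.3, 0.8, 0.5], dtype=np.float32)
light_dir /= np.linalg.norm(light_dir)

n_dot_l = np.clip(normals @ light_dir, 0.0, 1.0)                # (H, W)
lit = albedo * n_dot_l[..., None]                               # diffuse-only pass

print("lit frame:", lit.shape, lit.dtype)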
 
Dave Baumann said:
Sorry, a little prior to the NVIDIA announcement then. :rolleyes:

I don't know... it had been enthusiastically speculated upon since '03 (with nVidia even specifically pointed to). I'm sure some mightn't have been pleased at the prospect, but I doubt they're any more pleased now. The majority thought of it as a welcome relief, I think.

Dave Baumann said:
It's as silly as those who want to make it silly try to make it. I'd say there has been quite a lot of information in this thread despite its apparent "silliness".

True, but I think it's a pity it took the ramblings of Richard Huddy to tease out the little bits of new info we've got from here.
 
Dave Baumann said:
Curiously, I would suggest that the “meat” of his argument, given that it was the largest part of that copied text, was him saying that fundamentally RSX is not a custom design, unlike Xenos, and hence not as tailored to the specific needs of a console as theirs is. In his reasoning for saying that, he is also making several other assertions, such as that this was a last-minute switch and that RSX isn’t fundamentally different from the PC design, further suggesting that it isn’t fundamentally different from a GTX and what that encompasses.
Just how much custom design does a console GPU need these days? As far as I can see, there are three areas that can be customized for a console
1. Consoles can target a fixed resolution that historically has been lower than that of computer screens. This is still true this generation, but for Sony and MS it's not true to nearly the same extent that it was. Nintendo stand to gain more from this. As far as we know, ATI has used this for one thing mainly - helping AA. To public knowledge, the RSX does not have any particular feature that is tailored to having known lower resolution bounds for rendering.
2. Consoles can implement proprietary hardware features that do not have support (yet) in any "industry standard" API. I'm not expert enough to know what might be useful that isn't (projected to be) present in Direct3D. It would tend to make sense that if something is useful in consoles, then it would probably be useful for PCs as well. But at least, the console could implement the feature earlier, not being constrained by PC API release schedules. The rumours surrounding the RSX imply that it hasn't much in the way of such features. We'll see to what extent that is the case, and more importantly whether it makes any real-life difference.
3. A console can take a completely different tack on how to distribute workloads between CPU and GPU; in fact, it is architecturally much more flexible than a PC and could conceivably dispense entirely with this separation of functionality. As far as I can see, the PS3 is both more sophisticated and more capable in terms of data flow than the 360. The RSX is not as interesting as the Cell, but those 35GB/s of dedicated interprocessor bandwidth are pretty remarkable, and imply at least a potential for distributing workloads that would be impossible on a PC. In this regard, the 360 and Xenos are more conservative, and more like PC integrated graphics, in their architecture.

Have I missed something? The first two don't seem as if they could give a whole lot if you target HD resolutions, but the publicly available data say that the ATI design uses them more than the RSX does. The third could conceivably give a whole lot more in terms of custom design, but Sony didn't go as far as some thought. Still, the Cell+RSX combo and the interprocessor communication definitely go further in opening up new possibilities through architecture.

It seems to me the factual statements of Richard Huddy were flat out wrong, or lies, depending on your view of him. Lies, I'd say, I think it can be assumed that he knew full well about the E3 demonstrations. The "unrefined" - well, it's in the eye of the beholder. He may have a point, but it is arguable in itself, and even more so in terms of relevance.

Besides, "unrefined" has little bearing on power, there's a reason we say "brute" in brute force. :) Rather it signifies engineering efficiency. The XBox was unarguably the most "unrefined" piece of kit this generation, it was arguably the most powerful nevertheless, and unarguably lost the most money, cost being one reason. But "unrefined" doesn't imply diddly squat about the capabilities of the system.

MrWibble said:
I honestly didn't think this thread would make it past the first page... I think maybe people are so desperate either for news, or a good argument, that they'll jump on anything no matter how ridiculous. And this is certainly ridiculous.
Quoted for Truth.
But a good argument shouldn't be underestimated. Isn't it as good a way as any to pass the idle hours after Christmas, as long as the level is kept reasonably high? :)
I really would like to hear people's opinions about what optimizations are available for consoles.
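
On point 1 of the post above (designing around a known target resolution), the usual back-of-envelope arithmetic for Xenos's 10 MB of eDRAM looks like the sketch below. The bytes-per-pixel figures (4 for colour, 4 for depth/stencil) and the assumption that MSAA scales storage by the sample count are the commonly quoted ones, not official numbers.

Code:
# Back-of-envelope: does a 720p framebuffer fit in 10 MB of eDRAM?
# Assumes 4 bytes/pixel colour + 4 bytes/pixel depth/stencil, and that
# MSAA multiplies storage by the sample count - commonly quoted figures only.

EDRAM_BYTES = 10 * 1024 * 1024

def framebuffer_bytes(width, height, samples, bpp_colour=4, bpp_depth=4):
    return width * height * samples * (bpp_colour + bpp_depth)

for samples in (1, 2, 4):
    size = framebuffer_bytes(1280, 720, samples)
    tiles = -(-size // EDRAM_BYTES)          # ceiling division = tiles needed
    print(f"720p, {samples}x MSAA: {size / 2**20:.1f} MB -> {tiles} tile(s)")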
 
Laa-Yosh said:
Jawed, perhaps the whole point of all the talk about Cell and RSX working in parallel is about overcoming any shader limitations that the GPU would have? This could actually make the PS3 even more capable than Xenos by itself, for example there was that possibility of deferred shading Faf has mentioned, with SPEs using data generated by the GPU...
Hey I'm with you on that - I'm a champion of the "Cell and RSX together can do great things" cause. Though there aren't any specific descriptions of techniques that will build upon this relationship. Early days.

But it doesn't get round the fact it's making up for a shortfall in RSX. And Huddy was comparing GPUs.

While Xenos is moving in the direction of shifting graphics processing on to the GPU, RSX is more reliant upon the CPU. Luckily Cell is a vector-ops monster - though I'm still convinced that one GPU FLOP is worth about 2 SPE FLOPs - Fafalada keeps moaning about how backward VMX is...

Xenos also has tight integration with Xenon - PS3 doesn't have any advantage there.

It's worth noting that Xenos can't do predicated tiling all on its own - it requires Xenon to "fix-up" the vertex buffers after they've been marked up by Xenos (though in terms of computing power this is laughably trivial). So Xenos is being helped by Xenon in this case.

Jawed
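
To make the "fix-up" comment above a little more concrete: the kind of work being pushed back to the CPU is deciding which screen tiles each draw call actually touches, so the command stream can be replayed once per tile. The sketch below is a hypothetical illustration of that classification using screen-space bounding boxes, not the actual Xenos/Xenon mechanism, and the tile height is just an example figure.

Code:
# Hypothetical sketch of the per-tile classification that predicated
# tiling needs: given a draw call's screen-space bounds, list the tiles
# it touches. Not the actual Xenos/Xenon mechanism; numbers are examples.

TILE_H   = 256   # example tile height in pixels
SCREEN_H = 720

def tiles_touched(min_y, max_y, screen_h=SCREEN_H, tile_h=TILE_H):
    first = max(0, min_y) // tile_h
    last  = min(screen_h - 1, max_y) // tile_h
    return list(range(first, last + 1))

# A few example draw-call bounds (min_y, max_y) in screen space.
for bounds in [(0, 100), (200, 600), (500, 719)]:
    print(bounds, "->", tiles_touched(*bounds))
# The per-draw cost is a couple of compares - hence "laughably trivial".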
 
Jawed said:
Xenos also has tight integration with Xenon - PS3 doesn't have any advantage there.

It's worth noting that Xenos can't do predicated tiling all on its own - it requires Xenon to "fix-up" the vertex buffers after they've been marked up by Xenos (though in terms of computing power this is laughably trivial). So Xenos is being helped by Xenon in this case.

Jawed

How much bandwidth do current PC CPUs have to the GPU? (PCI Express is 2x2 GB/s? My memory fails me again.)
The GPU in the Xbox 360 can read from the CPU's L2 cache.
We know that the bandwidth is 10.6 GB/s up + 10.6 GB/s down to the northbridge/GPU.
Given the bus clock, and a guess at the proportion of L2 locked to be streamed to the GPU, can somebody calculate how much data can be sent to the GPU within the 10.6 GB/s of upstream bandwidth available?

Edit, and off-topic:
Wisez, I believe you live in France, in Paris?
Do you know of any interesting, accessible French-language sites that would let me improve my knowledge of 3D card architecture?
I found some interesting things on Onversity and a nice article on MATBE; on your site (i.e. Beyond3D) I didn't find a complete description of how DX9-compatible cards work.
Otherwise, the articles on (for example) hardware geometry processing and DirectX Next give good hints ;)
(Sorry for the small infraction and the off-topic.)
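
A rough numeric answer to the bandwidth question above, using the figures quoted in this thread plus commonly cited ones (PCI Express x16 at roughly 4 GB/s per direction, and the 20 GB/s write / 15 GB/s read split usually quoted for the PS3's Cell-to-RSX FlexIO link); treat all of these as approximate peaks rather than sustained rates.

Code:
# Rough per-frame budgets for CPU <-> GPU traffic. Figures are the ones
# quoted in this thread plus commonly cited peaks - approximate only.

links_gb_per_s = {
    "PCIe x16 (PC, per direction)":      4.0,
    "Xbox 360 FSB (per direction)":     10.6,   # figure quoted above
    "PS3 FlexIO, Cell -> RSX (write)":  20.0,   # commonly quoted split
    "PS3 FlexIO, RSX -> Cell (read)":   15.0,
}

for name, gbps in links_gb_per_s.items():
    mb_per_frame = gbps * 1024 / 60            # MB available per frame at 60 fps
    print(f"{name:34s} {gbps:5.1f} GB/s  ~{mb_per_frame:6.0f} MB per 60 fps frame")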
 
Dave Baumann said:
Even if the information is flawed, it doesn’t mean there aren’t elements of truth in there – for instance I would possibly argue that there were three internal contenders before NVIDIA even came into the fray, as I can see only two (and surely one of them wasn’t a contender for more time than it took someone to sketch out an image on a patent?), but from what I’ve heard and from what I’ve been told about it from quarters within NVIDIA I actually do believe the decision was fairly late in the day and that the design isn’t significantly different from what they already have “on the shelf”, so to speak.

And maybe that's true and maybe it isn't. Perhaps if we shone a desk lamp into the eyes of certain people around here they'd reveal such secrets... however I don't necessarily think it'll get us anywhere. Yes, if we knew how late/early in the day nVidia started to work on the RSX maybe it'd give us a clue as to how close it really is to being "just a G70" or whatever... but in all honesty we'll probably know that soon enough anyway when more details are revealed. "What does it do" is a far more interesting question than "When was the design process started".

Seems like an odd thing to get hung up on.

Elements such as these are not really of much of an issue in the PC space since the longevity of the part is limited – realistically G70 on the PC will last no longer than 12-24 months (if that) and such issues are not going to be of concern because largely the part is running, and being benchmarked on, effectively legacy applications. PC design is very much “accelerate now, give an insight to the future” because of the relatively transient nature of the design of a specific part.

You're right - 12 to 24 months after any part launches, it'll be eclipsed by the new kids on the block. The difference is that in the PC space things get replaced, whereas with a console we're stuck with it for 5 years, and comparing it to PC tech makes it look increasingly obsolete.

But I'm going to stick my neck out here and suggest that the XGPU is going to suffer exactly the same fate at pretty much the same time. While it may be more radically different to a PC GPU than RSX (and please note, I really don't know if that's true or not), even if it is, it's not using some kind of magic technology that's going to keep it ahead of the curve for any length of time.

I think in order to do that you really have to go down a route that no-one else is even thinking about taking, and in doing so accept that for mainstream stuff you might not compete, just for the sake of being a winner in some other arena. I think PS2 did this to some degree with the silly amounts of bandwidth/fillrate it had - even late on in its lifetime it had fillrate to die for, but with a few exceptions the image quality overall was clearly inferior to anything else.

XGPU may have been in design for longer and be more "consoleish" than RSX, but I think that's just going to mean differences in how developers target it - and not all of the effects will be positive. For those targeting cross-platform, it may come off worse (just as people are arguing that Cell will do for PS3 - though I might argue that going multi-core PPC is also not exactly like writing for a P4).

If anything what they have is not so much a chip that is better, but a chip that may be more cost effective.

So I think that based on approach, even if you take Huddy's comments at face value, the differences will be minor and will work both ways. I doubt either chip will be a nightmare to use, so any difference at all in the long term ought to be decided by the raw power available and not the peculiarities of design.
 
the battle begins

I would love to add something to this thread but I can't.

Except my console is better than yours. OK!

Seriously though this thread is good and I like it if only for the fact that there is some good information and no one is swearing at each other.
The facts are, a lot is known about the Xbox 360 by the public with regards to the basic technical specifications.
Not much is known about the RSX but there is a wealth of information regarding what makes up the Cell architecture.
People on this forum should also realise that there are certain members here privy to information, so look at their posts or... lack of posts, and deduce from there.

Everyone [else] is really talking asshat... yep even me.
 