ATI on Xenos

jvd said:
anyway I don't know how heavily modified the RSX will be, so it may have things that are redundant because they were integrated into the chip and couldn't be, umm, unintegrated.

This is an interesting point. Assuming 18 months of work, and assuming a new design based on another architecture/chip, can you take out the "redundant" stuff while designing it? I'm sure RSX is a different chip than the PC cards, but based on the same tech - and ~18 months seems like enough time, considering refreshes happen in less time (and there may have been more). I don't know how modular architectures/designs are, though, in terms of removing things(?)
 
Well, the video encoding options, I don't know how integrated they are into the chip. I don't know if they can be removed from the design. While it's a great boon for the PC, with Cell it's really not needed, but what if it's integral to the pipeline structure?
 
jvd said:
Well, the video encoding options, I don't know how integrated they are into the chip. I don't know if they can be removed from the design. While it's a great boon for the PC, with Cell it's really not needed, but what if it's integral to the pipeline structure?

I haven't a clue either. I guess again this is something we won't have an answer to until if and when NVidia or Sony start talking about RSX, or some charitable dev spills some beans...
 
Shifty Geezer said:
What percentage of current GPUs is sitting around idle on average at the moment, then?

Well, therein lies the rub - it's not as though I can go up to the graphics vendors and ask them "So, exactly how inefficient is your architecture?" and get a reply that doesn't say "graphics is inherently parallelisable, so we're never inefficient". However, we are now getting down to pretty low-level detail on exactly how best to deal with the inherent latencies of shader processing and how best to handle them.

nVidia have claimed they've looked at unified shaders, but they felt the efficiency loss compared to customised shaders meant a unified design couldn't outperform a conventional architecture. I find that hard to swallow unless unified shaders are about x% slower than customised shaders, where x% is the percentage of time that shaders sit idle on conventional GPUs.
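To put toy numbers on that x% intuition (every figure below is hypothetical, chosen only to illustrate the trade-off, not measured from any real GPU):

```python
# Hypothetical unit counts and idle fractions -- illustrative only.
VS_UNITS, PS_UNITS = 6, 16          # split design: dedicated vertex/pixel ALUs
VS_IDLE, PS_IDLE = 0.40, 0.10       # fraction of time each pool sits idle
split_throughput = VS_UNITS * (1 - VS_IDLE) + PS_UNITS * (1 - PS_IDLE)

UNIFIED_UNITS = 22                  # same total ALU budget, one shared pool
UNIFIED_OVERHEAD = 0.05             # assumed cost of the shared scheduler
unified_throughput = UNIFIED_UNITS * (1 - UNIFIED_OVERHEAD)

print(split_throughput)             # about 18.0 effective ALUs
print(unified_throughput)           # about 20.9 effective ALUs
```

With these made-up numbers the unified pool wins; shrink the idle fractions (or grow the scheduling overhead) and the split design wins, which is exactly the x% argument above.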

Actually, I believe their main complaint was about the different demands VS and PS place on the processor, and how the ALUs need to handle the latencies differently, because PS usually involves filtered texture lookups, which are usually the most latency-bound of operations. However, ATI's point is that they have removed the ALU-texture link by separating them - if there is a texture instruction in a shader program, the texture samplers will be tasked with retrieving that texture data while completely independent threads work on the ALUs; when the texture data is ready, the shader program that requested it can go back into context and run on the ALUs, and the texture data will already be buffered and usable immediately when the shader instruction that operates on it is processed.
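ATI's scheme, as described above, can be sketched as a toy scheduler: a shader "thread" (a batch of pixels) parks while its texture fetch is in flight, and other ready threads keep the ALUs busy in the meantime. All the numbers here (fetch latency, burst length, thread counts) are made up purely for illustration:

```python
from collections import deque

# Toy model: each "thread" is a pixel batch; a texture fetch parks it
# for FETCH_LATENCY cycles while other ready threads occupy the ALUs.
FETCH_LATENCY = 8

def run(num_threads, alu_ops_between_fetches, fetches_per_thread):
    ready = deque(range(num_threads))
    waiting = {}                        # thread -> cycle its texture data arrives
    remaining = {t: fetches_per_thread for t in ready}
    cycle = busy = 0
    while remaining:
        # wake threads whose fetch has completed
        for t in [t for t, due in waiting.items() if due <= cycle]:
            del waiting[t]
            ready.append(t)
        if ready:
            t = ready.popleft()
            busy += alu_ops_between_fetches     # ALU work executes
            cycle += alu_ops_between_fetches
            remaining[t] -= 1
            if remaining[t] == 0:
                del remaining[t]
            else:
                waiting[t] = cycle + FETCH_LATENCY  # park until data returns
        else:
            cycle += 1                  # ALUs stall: nothing is ready to run
    return busy / cycle                 # ALU utilization

print(run(1, 4, 4))   # single thread: ALUs stall on every fetch (0.4)
print(run(8, 4, 4))   # enough threads: fetch latency fully hidden (1.0)
```

The point of the sketch is the second line: with enough independent threads in flight, texture latency stops costing ALU cycles at all, which is the decoupling argument in a nutshell.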
 
jvd said:
I would think neither does RSX, unless Sony&NVidia had monkeys working on the chip.

Do the people who made the GeForce FX count? ;) Anyway I don't know how heavily modified the RSX will be, so it may have things that are redundant because they were integrated into the chip and couldn't be, umm, unintegrated.

Hey, remember that before R500, there was R400... ;)

So it's not like things can't go wrong on either side.

By the way, I looked up the PureVideo transistor count on the GeForce 6 chips, and it's roughly ~20 million.
 
[Attached image: P1010276.jpg]


Dave,

I can see the inherent advantages of decoupling the shader ALUs from texture ALUs.

I can also see the advantages of specialised shader ALUs and unified shader ALUs. However, the RSX architecture above also shows a 'decoupling' of texture ALUs but isn't clear on whether the shader ALUs are specialised or unified. So I can see two paths here for optimisation/efficiency,

1. Decoupled texture ALUs VS Coupled texture ALUs

2. Unified Shader ALUs VS Specialised Shader ALUs

Both RSX and Xenos look like they have 1, but 2 isn't clear? :?

I have yet to hear any comments from you on RSX on that note?
 
Hey, remember that before R500, there was R400...

So it's not like things can't go wrong on either side.

By the way, I looked up the PureVideo transistor count on the GeForce 6 chips, and it's roughly ~20 million.
Oh, no doubt. However, the fact that ATI scrapped the R400, or postponed it, leaves me feeling that Xenos will work as intended, and looking at the problem from a semi-different approach may help a lot.

Anyway, we really don't know how either the RSX or the Xenos works. Looking at transistor counts alone won't tell the whole story, and I don't think even in the end it will be as clear-cut as one being more powerful than the other. I think you will find that they each end up edging out the other at different tasks.
 
I would say that the image above is a picture to display a concept, not an architectural representation.
 
DaveBaumann said:
I would say that the image above is a picture to display a concept, not an architectural representation.

Well, that could be true, but the image conveys very little that bears on points 1 and 2 - especially point 1.

Though apparently this is a clue to the architectural differences between Xenos and RSX,

MrFloopy said:
Yes, there was a slide in the presentation: "independent pixel/vertex shader pipelines".

How independent? That is the question you need to answer.
Answer that and the difference between the two systems will also be revealed.

http://www.beyond3d.com/forum/viewtopic.php?p=522443#522443

So, "How independent?" is apparently the difference between Xenos and RSX...?

I look forward to your Xenos article, but I also hope you can do a thorough article on RSX when info becomes available... :p
 
Jaws, I can't talk because I'm under NDA, but given what I know about upcoming parts and what we hear about RSX I think you can "colour me surprised" if that diagram is a representation of architectural operation.

As for an article on RSX - given all the commentary from NVIDIA, do you actually expect it to be significantly different from their PC parts?
 
DaveBaumann said:
Jaws, I can't talk because I'm under NDA, but given what I know about upcoming parts and what we hear about RSX I think you can "colour me surprised" if that diagram is a representation of architectural operation.

Well, that NDA explains a lot! :p

DaveBaumann said:
As for an article on RSX - given all the commentary from NVIDIA, do you actually expect it to be significantly different from their PC parts?

Well the curiosity is on three levels,

1. How different the G70 is from NV40?

2. How different RSX is from G70/G80/WGF 2.0, with its implementation/customisation with CELL and OpenGL|ES?

3. PS3 and X360, different architectures, same result?

So I take this as a hint that an RSX article is unlikely?
 
Just suggestions..

If RSX is too similar to PC technology being covered anyway, looking at the system (PS3) as a whole with regard to graphics, as opposed to just the GPU in isolation, could be worthwhile. Coverage/analysis/investigation of the relationship between Cell and RSX wrt graphics, the role Cell can play there, etc. is very lacking (well, all coverage is, tbh, but that may be symptomatic of Sony/NVidia's stance on information release rather than a lack of effort on the part of websites). Cell is arguably a step closer to the "CPU that's like a GPU", architecturally, and Sony evidently sees a role for it in graphics beyond the norm for a CPU. So that is something "different" that could be worth looking at. But perhaps curiosity in that area isn't widespread... just my own personal thought, perhaps!

RSX itself beyond that could be treated as any "refresh" is.
 
Shifty Geezer said:
Exactly. So how come they came to different opinions? I can only conclude the actual difference in the performance of the architectures doesn't amount to much!

Because they both envisage different usage patterns.
Because neither one can see the future so they have to assume and extrapolate.

Ideas can be great on paper and suck in practice. And sometimes stupid brute-force solutions are the best way to go.
 
So I take this as a hint that an RSX article is unlikely?

I don't know yet, we'll see if/when people start talking a little more about RSX.

With Xenos we have some different processing elements to consider: eDRAM / MSAA / Z pass / tiling is one element to consider for Xenon, and then there is the shader architecture, which is a bit out there and which I firmly believe will give us a glimpse of ATI's PC future (important to this site). RSX's pixel operation looks to be more in line with other processors, so there probably isn't much to look at there, and the architecture appears to be PC-derived, so it will probably be covered in one form or another sooner or later - if it's later (but before it's covered from the PC side) then I'd like to do the same (if possible), but my suspicion is that it's sooner.
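As a back-of-envelope check on the eDRAM/tiling element mentioned above (assuming the commonly cited 10 MB of eDRAM and 8 bytes per sample for 32-bit colour plus 32-bit Z/stencil; a sketch, not a spec):

```python
import math

EDRAM_BYTES = 10 * 1024 * 1024      # Xenos daughter-die eDRAM
W, H = 1280, 720                    # 720p render target
BYTES_PER_SAMPLE = 8                # assumed: 32-bit colour + 32-bit Z/stencil

def tiles_needed(msaa_samples):
    """How many screen tiles are needed to fit the framebuffer in eDRAM."""
    fb_bytes = W * H * msaa_samples * BYTES_PER_SAMPLE
    return math.ceil(fb_bytes / EDRAM_BYTES)

for aa in (1, 2, 4):
    mb = W * H * aa * BYTES_PER_SAMPLE / (1024 * 1024)
    print(f"{aa}xAA: {mb:.1f} MB -> {tiles_needed(aa)} tile(s)")
# No AA fits outright; 2xAA needs 2 tiles; 4xAA needs 3
```

Which is exactly why tiling enters the picture at 720p once MSAA is enabled.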
 
Well, in the end I am almost sure that you will talk about it in the forums (and for a "variant" that should be enough, like NV2A for the Xbox); later we could just make a collection of posts, which should be enough and would save a lot of time.
 
If you look at the financials of the Sony/Nvidia deal, you would conclude that Sony got a PC-derived part and not some exotic design à la Xenos. Also, I think the Cg libraries were very important to Sony, as they can't afford a two-year grace period for devs to get good visuals.

I think RSX is a marriage of convenience. Sony was too proud up till the 11th hour to admit it needed a decent GPU, and Nvidia was too cheap to go with a tech licensing deal... that is, until they saw they might not end up in any console except the Phantom, and maybe were a little worried about consumer perception. In the end Nvidia ate crow, but at least this time they didn't pay too many engineers to cook it.
 
Pozer said:
If you look at the financials of the Sony/Nvidia deal, you would conclude that Sony got a PC-derived part and not some exotic design à la Xenos. Also, I think the Cg libraries were very important to Sony, as they can't afford a two-year grace period for devs to get good visuals.

I think RSX is a marriage of convenience. Sony was too proud up till the 11th hour to admit it needed a decent GPU, and Nvidia was too cheap to go with a tech licensing deal... that is, until they saw they might not end up in any console except the Phantom, and maybe were a little worried about consumer perception. In the end Nvidia ate crow, but at least this time they didn't pay too many engineers to cook it.

Is there any way to know, or a link showing, that Sony admitted they needed a decent GPU and didn't plan on Nvidia until late last year?
 
I read they had filed patents regarding using 3 Cells in the PS3; 2 would act as the GPU.

Then last year they signed a deal with Nvidia to make a GPU, and only went with 1 Cell processor.

So I think most of the speculation is based on the patents Sony filed.
 
That is a reasonable way to determine the order of events. I personally think that Nvidia had a proposal on the table all along. I think that Sony decided to go with Nvidia because they bring a lot to the table, including a shader language, Linux expertise, and console experience. Furthermore, if anyone could design a bus between a GPU and a CPU, it is Nvidia.
 
DaveBaumann said:
So I take this as a hint that an RSX article is unlikely?

I don't know yet, we'll see if/when people start talking a little more about RSX.

With Xenos we have some different processing elements to consider: eDRAM / MSAA / Z pass / tiling is one element to consider for Xenon, and then there is the shader architecture, which is a bit out there and which I firmly believe will give us a glimpse of ATI's PC future (important to this site). RSX's pixel operation looks to be more in line with other processors, so there probably isn't much to look at there, and the architecture appears to be PC-derived, so it will probably be covered in one form or another sooner or later - if it's later (but before it's covered from the PC side) then I'd like to do the same (if possible), but my suspicion is that it's sooner.

When will your eagerly awaited article be up on the website, Dave? :oops:
 