where they sacrificed some features to provide room for others.
I dunno!!! I'm not saying they should have done anything differently! I said I like the design! I'm just pointing out that the OP's statement about trading those transistors for eDRAM instead of something else is valid, whereas jvd seems to think there wasn't any trade at all. Hell, they sacrificed 100 million transistors' worth of ADD gates, and rightly so, but it was still something they could have done differently.

Some features like what exactly?
Lazy8s said:
The premise for this line of reasoning is confused anyway. The eDRAM is not some feature trade-off for AA; it's a repositioning of the pipeline to keep bandwidth-intensive operations off the external bus. It impacts the whole rendering scheme.
Actually, it isn't. It's quite simple (and I can't figure out why someone like you would ignore it, or argue semantics just for the sake of arguing).
Qroach said:
Nobody here is saying there aren't tradeoffs.
That's what I'm responding to. jvd says the core is already bigger than existing ATI GPUs, so the eDRAM was just extra, as though the Xenos core was going to be 232M transistors regardless of whether there was eDRAM or not. Whereas, as you agree, if ATI didn't have that eDRAM they could have used 100 million extra transistors of die space for something other than backbuffer storage and processing.

jvd said:
Did they though? The Xenos is bigger than the R420.
232M vs. 160M transistors.
So it's already much bigger than their previous chip. I don't really see them sacrificing transistors for eDRAM, and it looks like they won't be sacrificing yields either, since they are two different chips.
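For what it's worth, the 10 MB of eDRAM by itself roughly accounts for the transistor gap being argued over. A minimal back-of-envelope sketch in Python, assuming a 1T1C eDRAM cell (about one transistor per bit, the usual approximation); the 20M allowance for ROP/AA logic on the daughter die is my own guess, not a published number:

EDRAM_MB = 10
bits = EDRAM_MB * 1024 * 1024 * 8            # 10 MB of storage in bits
cell_transistors = bits                      # 1T1C eDRAM: roughly one transistor per bit cell
logic_transistors = 20_000_000               # assumed allowance for ROPs, AA/Z/blend logic, interface
total = cell_transistors + logic_transistors
print(f"eDRAM array: ~{cell_transistors / 1e6:.0f}M transistors")
print(f"daughter die estimate: ~{total / 1e6:.0f}M transistors")

That works out to roughly 84M transistors for the array alone and on the order of 100M for the daughter die as a whole, which is where the "100 million transistors" figure being thrown around in this thread comes from.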
Qroach said:
OK, first make up your minds here: what are you both arguing?
1. Are ATI sacrificing transistors for the daughter die?
2. Are ATI sacrificing graphics features for the daughter die?
NucNavST3 said:
3roxor said:
Remember that computer graphics are all about optical illusion, and the greater the viewing distance, the less visible the jaggies.
That "free" 4x AA is a great thing for the Xenos GPU, although I don't know how useful it is with HDTV screens (or normal screens).
The recommended viewing distance is 8 to 12 ft. for a 42" plasma TV and 12 to 16 ft. or more for a 50" screen. (Most people also watch TV at these distances.)
This gen, most people playing on their Xbox/PS2 grab an extra chair (or sit on the ground) with their heads almost glued to the screen, mainly because of the controller cords. Next gen we have wireless controllers, so you can expect people to sit at the recommended distance.
I hereby invite everyone to my house to see just how shitty no AA looks on HD sets; you can sit as far back as you want... I'm having a hard time seeing how AA is a bad thing, or an unneeded feature.
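To put the viewing-distance argument in rough numbers, here's a quick back-of-envelope sketch in Python (the 720p resolution, the screen/distance pairs, and the ~1 arcminute visual-acuity rule of thumb are my own assumptions for illustration, not figures from the posts above):

import math

def pixel_arcminutes(diagonal_in, distance_ft, h_res=1280, aspect=16/9):
    # angular size of one pixel, in arcminutes, on a 16:9 screen
    width_in = diagonal_in * aspect / math.sqrt(1 + aspect ** 2)  # screen width from diagonal
    pixel_in = width_in / h_res                                   # physical width of one pixel
    distance_in = distance_ft * 12
    radians = 2 * math.atan(pixel_in / (2 * distance_in))
    return math.degrees(radians) * 60

# ~1 arcminute is a common rule of thumb for what the eye can resolve
for diag, dist in [(42, 10), (50, 14)]:
    print(f'{diag}" at {dist} ft: {pixel_arcminutes(diag, dist):.2f} arcmin per pixel')

At those distances a 720p pixel is already close to the ~1 arcminute limit, which is why people disagree about how visible the jaggies are; sit a few feet closer, as plenty of gamers do, and aliasing becomes obvious again.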
If ATI can make a comparably sized GPU with an eDRAM daughter die that completely takes care of AA and bandwidth bottlenecks, AND can match the shader output of nVidia's single conventional GPU, AND release it 6 months earlier, then I'm sorry, but nVidia has no business being in the graphics processor business.

Qroach said:
I wouldn't be so bold as to say that PS3 will outperform Xbox 360 in shader performance.
seismologist said:For me, the lack of AA is barely noticeable at HD resolution.
So couldn't that die area be used for something more useful?
Qroach said:...
I wouldn't be so bold as to say that PS3 will outperform Xbox 360 in shader performance.
...
Oda said:
The way I see it (and please correct me if I'm wrong), the two different design choices reflect each company's ideology about game development in the coming gen.
The PS3 is designed with a flexible architecture, as nothing is dedicated to one or two specific tasks. There is plenty of bandwidth to add copious amounts of AA, HDR, and/or AF, or one can choose to use more of the massive shader power.
The 360, on the other hand, is designed around MS's idea that all their games need to have 4x AA at 720p, so they included less shader power than the PS3 in order to fit the eDRAM. They also provided less bandwidth, but that's because the eDRAM takes care of the backbuffer (see the rough numbers after this post).
Sony says use our hardware however you see fit, MS says use it how we tell you. Simple as that I think.
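As a rough illustration of why moving the backbuffer on-die matters, here's a sketch in Python of the raw read-modify-write traffic a 720p, 4x multisampled colour + Z buffer can generate (the frame rate, overdraw factor, and bytes per sample are assumptions for the sake of the example, not anything claimed in this thread):

WIDTH, HEIGHT = 1280, 720
SAMPLES = 4                  # 4x multisample AA
BYTES_COLOR = 4              # 32-bit colour per sample
BYTES_Z = 4                  # 32-bit depth/stencil per sample
OVERDRAW = 3                 # assumed average writes per pixel per frame
FPS = 60

samples_per_frame = WIDTH * HEIGHT * SAMPLES * OVERDRAW
# blending and Z testing are read-modify-write: a read plus a write for both colour and Z
bytes_per_frame = samples_per_frame * 2 * (BYTES_COLOR + BYTES_Z)
print(f"{bytes_per_frame * FPS / 1e9:.1f} GB/s of framebuffer traffic")

Even with these fairly tame assumptions that's around 10 GB/s, roughly half of the 360's 22.4 GB/s of shared GDDR3 bandwidth if it all had to cross the external bus, and that is exactly the traffic the eDRAM keeps local.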
Quite honestly, I believe the real reason MS wanted an eDRAM design was that the XGPU's biggest weakness was transparent fillrate.
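That theory is easy to put numbers on: a transparent (alpha-blended) pixel can't just be written, the existing colour has to be read back and blended first, and none of that work can be rejected early the way opaque overdraw can. A toy Python sketch of the colour traffic for a stack of transparent layers (the layer count and screen coverage are made-up illustrative values):

WIDTH, HEIGHT = 1280, 720
BYTES_COLOR = 4              # 32-bit colour
LAYERS = 8                   # transparent layers stacked over a pixel (smoke, particles, etc.)
COVERAGE = 0.25              # assumed fraction of the screen covered by the effect

pixels = WIDTH * HEIGHT * COVERAGE * LAYERS
opaque_bytes = pixels * BYTES_COLOR          # opaque write: colour write only
blended_bytes = pixels * BYTES_COLOR * 2     # alpha blend: read existing colour + write result
print(f"opaque:  {opaque_bytes / 1e6:.1f} MB of colour traffic per frame")
print(f"blended: {blended_bytes / 1e6:.1f} MB of colour traffic per frame")

The blend doubles the colour traffic for every layer, and all of it has to hit the framebuffer. With the backbuffer sitting in eDRAM, that read-modify-write happens at the daughter die's internal bandwidth instead of competing for the shared external bus, which fits the "transparent fillrate was the weakness" explanation.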