Is free AA really worth it?

where they sacrificed some features to provide room for others.

Some features like what exactly? I have to admit Lazy8s hit the nail on the head. This argument exposes personal preference and nothing more.
 
Some features like what exactly?
I dunno!!! I'm not saying they should have done anything differently! I said I like the design! Just pointing out that the OP's point, that those transistors were traded for eDRAM instead of something else, is valid, whereas jvd seems to think there wasn't any trade at all. Hell, they sacrificed 100 million transistors of ADD gates, and rightly too, but it was still something they could have done differently.

As the OP asked, what could they have done differently? I already answered this. I'm now whinging about jvd suggesting Xenos would be exactly the same if they didn't include eDRAM, as though the XB360 was gonna have 250 million trannies no matter what and the extra 100 for eDRAM was a bonus, instead of 'We can squeeze 300+ million trannies onto this die. What will we add and what will we leave out?' They had a tranny budget, they had things they could have added or not (number of shader units, number of texture units, functions in those units, local storage...) and they chose 10 MB of local storage and a bunch of logic instead of spending those trannies elsewhere.
 
(my post was obviously directed at Qroach, not Shifty)

Actually it isn't. It's quite simple (and I can't figure out why someone like you would be ignoring it or simply arguing semantics for the sake of arguing).

Every thing comes down to "design choice". When a choice is made to make a balanced system - there are tradeoffs being made - and these tradeoffs come at a price (in this case, transistors devoted to a daughter die with lots of eDRAM and logic).

They traded of heaps of transistors in the PS2's GS for eDRAM just as they are trading of transistors in Xenos for its daugther die. At the end of the day, Nvidia will be putting a 300 million GPU into the PS3 and ATi a 230million + 100 million one. Obviously, ATi could have went for a equally sized single die GPU as well, but they chose not to. At the end of the day, ATi traded off transistors for free AA etc at the cost of using more transistors for more unified shaders (or whatever they could have done with it).

Nothing is "free".
 
Lazy8s said:
The premise for this line of reasoning is confused anyway. The eDRAM is not some feature trade-off for AA; it's a repositioning of the pipeline to keep bandwidth-intensive operations off the external bus. It impacts the whole rendering scheme.

Maybe my premise is a bit confused. I was under the assumption that the most bandwidth intensive operation is the 4x supersampling for the AA itself.
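Just to put some very rough numbers on that assumption (my own back-of-envelope figures, nothing official: 32-bit colour and Z per sample, ~3x overdraw, 60 fps, and roughly one read plus one write per covered sample):

Code:
# Rough sketch of colour+Z backbuffer traffic per second at 720p,
# with and without 4x AA. All the constants here are my own assumptions.
def backbuffer_traffic_gb_s(width, height, samples, overdraw, fps):
    bytes_per_sample = 4 + 4                    # 32-bit colour + 32-bit depth
    # roughly one read (Z test / blend) plus one write per covered sample
    bytes_per_frame = width * height * samples * bytes_per_sample * 2 * overdraw
    return bytes_per_frame * fps / 1e9

for samples in (1, 4):
    print(f"{samples}x AA: ~{backbuffer_traffic_gb_s(1280, 720, samples, 3, 60):.1f} GB/s")

That works out to roughly 2.7 GB/s without AA versus 10.6 GB/s with 4x AA, which is the kind of traffic the eDRAM keeps off the external bus.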

My impression is that they tried to get a little bit of everything in this design, which is good for a balanced design. Maybe not ideal from a raw performance standpoint. Maybe I'm wrong, I haven't really read much about this, but I assumed that if this GPU design was so ideal it would be adopted by future PC GPUs. I haven't heard anything about ATI moving to eDRAM in their PC product line yet.

Oh I agree that AA is noticeable at HD resolution but to what extent? Whenever I have to make a tradeoff for performance in any of my games AA is usually the first thing to go.
 
I think it's a good thing. 4xAA is just about when AA starts becoming effective IMO. Xenos is 4xMSAA, right? I think FSAA is better, but 4xMSAA is awesome too. The tradeoff in tranny logic is fine if you go on the assumption that the extra shader power would be wasted through 4xMSAA or 4xFSAA anyway. Lower efficiency might drop the projected levels to what Xenos is gonna deliver anyway, so it might not be a tradeoff at all. Matter of fact, it might end up running better with 4xAA now than it would have otherwise. If it was just 2xAA, then I'd agree. But I have to say that ATI's design looks fab the way it is now. PEACE.
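As a rough aside on the FSAA vs MSAA cost difference (a simplified model with my own assumptions, not ATI's numbers): supersampling runs the pixel shader for every sample, while multisampling shades each pixel once and only replicates the result into the covered samples.

Code:
# Simplified comparison of shader work per frame at 720p:
# 4x supersampling vs 4x multisampling. My own assumptions.
WIDTH, HEIGHT, SAMPLES = 1280, 720, 4
pixels = WIDTH * HEIGHT

ssaa_shader_runs = pixels * SAMPLES   # SSAA: shade every sample
msaa_shader_runs = pixels             # MSAA: shade once per pixel

print(f"4xSSAA: ~{ssaa_shader_runs / 1e6:.1f}M pixel shader runs per frame")
print(f"4xMSAA: ~{msaa_shader_runs / 1e6:.1f}M pixel shader runs per frame")

So 4xMSAA costs bandwidth and sample storage but next to no extra shader power, which is why the eDRAM trade reads differently depending on which flavour of AA you have in mind.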
 
Actually it isn't. It's quite simple (and I can't figure out why someone like you would be ignoring it or simply arguing semantics for the sake of arguing).

Have a look here, Phil, I'm not the one arguing semantics. I quoted you specifically as saying "sacrificing features". What "features" are those? I'm not talking about transistor space. You say they are sacrificing features. You're being a tad more specific than I was expecting. I'm willing to bet you simply don't know what features those are.

Of course there are tradeoffs. Sure, there always are, but when you come out and say they're "sacrificing features" without having any idea what those features are, then you are just arguing for the sake of arguing. From what I see of you and Shifty in this thread, you're both arguing a point that isn't logical. Nobody here is saying there aren't tradeoffs. What we're asking is exactly what features you think they are sacrificing.

It's a full-featured graphics chip that goes beyond anything available today. If they are indeed "sacrificing" something, at least have an idea what that is, because it seems like you are trying to intentionally find something to complain about.

You can't really compare what was done with the PS2 graphics chip, as it couldn't even be considered a "full featured" graphics chip by DX and OpenGL standards, compared to graphics chips available at the same time.
 
Qroach said:
Nobody here is saying there aren't tradeoffs.
jvd said:
Did they though? The Xenos is bigger than the R420.

232m vs 160m

So it's already much bigger than their previous chip. I don't really see them sacrificing transistors for eDRAM, and it looks like they won't be sacrificing yields as they are two different chips.
That's what I'm responding to. jvd says the core is already bigger than the existing ATi GPU, so the eDRAM was just extra, as though the Xenos core was gonna be 232m trannies regardless of whether there was eDRAM or not. Whereas, as you agree, if ATi didn't have that eDRAM they could use 100 million extra transistors of die space for something other than backbuffer storage and processing.
 
ok first make up your mind here, what are you both arguing?

1. Are ATI sacrificing transistors for the daughter die?

2. Are ATI sacrificing graphics features for the daughter die?
 
Qroach said:
ok first make up your mind here, what are you both arguing?

1. Are ATI sacrificing transistors for the daughter die?

2. Are ATI sacrificing graphics features for the daughter die?

Either/or, though I wouldn't necessarily characterise it as a feature sacrifice so much as a potential shader performance sacrifice.

It's pointless comparing Xenos to the R420, you gotta think about what might have been otherwise.
 
I'm arguing 1 and 2, but for semantics purposes with jvd. If ATi hadn't included eDRAM, they would have had another 100 million trannies for other 'stuff', which would have had other features - maybe another cluster of 16 ALUs for a fourth unified shader array. Whether that would have been a better use of resources, I have no idea. I don't think anyone knows until the real-world practical performance of Xenos is known. But like I said, I think Xenos is a great design and I can't see anything wrong with it myself.
 
The way I see it (and please correct me if I'm wrong) is that the two different design choices are relative to each company's ideology on game development in the coming gen.

The PS3 is designed with a flexible architecture, as nothing is dedicated to one or two specific tasks. There is plenty of bandwidth to add in copious amounts of AA, HDR and/or AF, or one can choose to utilize more of the massive shader power.

The 360 on the other hand is designed around MS' idea that all their games need to have 4xAA at 720p, so they included less shader power than the PS3 in order to include the eDRAM. They also provided less bandwidth, but that's due to the eDRAM taking care of the backbuffer.

Sony says use our hardware however you see fit, MS says use it how we tell you. Simple as that I think.
 
You're just speculating, as far as I can see. I think ATI made some good decisions with this chip, and to say they traded off features without knowing what those features are isn't a fair assessment to make at all.

Compare this chip to what is available today and see if they sacrificed features. I don't think they did anywhere. It has all the features of a current graphics chip and well beyond. So to say they could have stuck in more shader power is a completely unfair thing to say when you have no idea what the shader performance actually is.
 
I think your assessments are incorrect.

MS/ATI designed their chip to remove bottlenecks from areas that degrade performance, such as the most bandwidth-hungry portions of the graphics pipeline. I wouldn't be so bold as to say that the PS3 will outperform the Xbox 360 in shader performance.

Sony says use our hardware however you see fit, MS says use it how we tell you.

I'm sorry but that is just wrong.
 
NucNavST3 said:
3roxor said:
Remember that computer graphics is all about optical illusion, and the bigger the distance, the less visible the jaggies.

That "free" 4* AA is a great thing for the xenos GPU allthough I don't know if it's that usefull with HDTV screens (or normal screens).

The recommended viewing distance is 8 to 12 ft. on a 42" plasma TV and 12 to 16 ft. or more on a 50" screen. (Most people also watch TV at these distances.)

This gen, most people when playing on their Xbox/PS2 go grab an extra chair (or sit on the ground) with their heads almost glued to the screen, mainly because of the controllers. Next gen we have wireless controllers, so you can expect people to sit at the recommended distance.

I hereby invite everyone to my house to see just how shitty no AA looks on HD sets, you can sit as far back as you want... I'm having a hard time seeing how AA is a bad thing, or an unneeded feature.

Well, your point would at the very least have to be re-evaluated come actual HD games on that HD set. SD resolution games scaled up to HD resolutions without AA are going to look a fair deal worse than actual HD games without AA.

I'm not saying it won't look better with the AA, because of course it will, but maybe it won't qualify for 'shitty' any longer.
 
Qroach said:
I wouldn't be so bold as to say that the PS3 will outperform the Xbox 360 in shader performance.
If ATI can make a comparably sized GPU that has an eDRAM daughter die that completely takes care of AA, etc. and bandwidth bottlenecks, AND can match the shader output of nVidia's single conventional GPU, AND release it 6 months earlier, then I'm sorry, but nVidia has no business being in the graphics processor business.

If they somehow do end up doing this, then hats off to them, and to nVidia I'd say 'go home'. Excuse me if I'm more than a little skeptical, though.
 
seismologist said:
For me, the lack of AA is barely noticeable at HD resolution.
So couldn't that die area be used for something more useful?

What games are you playing and how big is your TV?

Current HD games are, for the most part, a complete mess as far as aliasing goes.
 
Qroach said:
...
I wouldn't be so bold as to say that the PS3 will outperform the Xbox 360 in shader performance.
...

I'm of the belief that you shouldn't isolate 'one' system component and analyse its strengths and weaknesses without it impacting 'other' components. And on that note, talking about theoretical shader performance, if you take CELL's SPUs and the thought that they can run Cg shaders, then yes, the PS3 has plenty of shader 'headroom' against the X360.
 
Oda said:
The way I see it (and please correct me if I'm wrong) is that the two different design choices are relative to each company's ideology on game development in the coming gen.

The PS3 is designed with a flexible architecture, as nothing is dedicated to one or two specific tasks. There is plenty of bandwidth to add in copious amounts of AA, HDR and/or AF, or one can choose to utilize more of the massive shader power.

The 360 on the other hand is designed around MS' idea that all their games need to have 4xAA at 720p, so they included less shader power than the PS3 in order to include the eDRAM. They also provided less bandwidth, but that's due to the eDRAM taking care of the backbuffer.

Sony says use our hardware however you see fit, MS says use it how we tell you. Simple as that I think.

Quite honestly, I believe the real reason that MS wanted an eDRAM design was because the XGPU's biggest weakness was transparent fillrate.

Bear in mind that a graphics card gets nothing like peak bandwidth to memory when rendering a real scene; between page breaks and other inefficiencies, you'll likely see less than half the peak bandwidth in a real situation. Transparencies are even worse because of the swaps from read to write.

You can bring an NV2A to its knees with one badly thought-out particle system; it's about the only time you'll ever see an Xbox outperformed by a PS2.
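To put some entirely made-up but plausible numbers on that (my own assumptions: 32-bit colour, every blended layer reads the destination back before writing, and no Z traffic counted):

Code:
# Rough illustration of why blended overdraw eats bandwidth: each layer
# costs a read plus a write of the destination colour. My own assumptions.
def blended_colour_traffic_gb_s(width, height, layers, fps):
    bytes_per_pixel = 4            # 32-bit colour
    read_plus_write = 2            # blend = read destination + write result
    return width * height * layers * bytes_per_pixel * read_plus_write * fps / 1e9

# e.g. a particle system covering a 640x480 screen with 20 blended layers at 60 fps
print(f"~{blended_colour_traffic_gb_s(640, 480, 20, 60):.1f} GB/s of colour traffic alone")

That's around 2.9 GB/s of colour traffic alone on a bus that also has to feed the CPU and textures, before you count the read/write turnaround penalties.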

I think the natural reaction to this is to try and address the shortcomings of your previous design; hence you take bandwidth out of the equation as a limiting factor, which is what Xenos does.

My personal belief is that the requirement for 720p came way after most of the early design decisions (although it was likely always a consideration). I think the original intention was to use the built-in scaler (which is supposed to be extremely impressive) to provide HD support from widescreen SD images, but MS were worried about Sony's marketing dept bashing them on the HD bullet point, and so you have the 720p requirement.
 
Quite honestly, I believe the real reason that MS wanted an eDRAM design was because the XGPU's biggest weakness was transparent fillrate.

Do you think this weakness will be evident on the Nvidia chip in the PS3?
 