MS big screw up: EDRAM

Nobody has convinced me.

If bandwidth is not a problem at 720P then WHY have it?

Who would want a 16 pipe RSX with EDRAM?

It's a trade-off of finite resources.

For all the people wishing for EDRAM in PS3, again, would you give up 1/3 the power for it?

Because that's what it costs.
 
Bill said:
Nobody has convinced me.

If bandwidth is not a problem at 720P then WHY have it?

Bandwidth is a problem if you do a lot of post-processing effects, and even more so if you share it with the CPU.
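To put rough numbers on it (purely illustrative figures, assuming a ~22.4 GB/s shared GDDR3 bus, a handful of full-screen post passes and a 60 fps target), here's a quick back-of-envelope sketch:

# Back-of-envelope framebuffer traffic at 720p. All numbers are illustrative
# assumptions, not official specs: shared 22.4 GB/s bus, 60 fps target.

WIDTH, HEIGHT, FPS = 1280, 720, 60
PIXELS = WIDTH * HEIGHT

BYTES_COLOR = 4          # 32-bit color
BYTES_DEPTH = 4          # 32-bit depth/stencil
OVERDRAW    = 4.0        # assumed average layers of alpha-blended overdraw
POST_PASSES = 5          # assumed full-screen post-processing passes

# Scene pass: per covered pixel, blending = color read + write, plus Z read + write.
scene_bytes = PIXELS * OVERDRAW * (2 * BYTES_COLOR + 2 * BYTES_DEPTH)

# Each post pass reads the previous buffer and writes a new one.
post_bytes = PIXELS * POST_PASSES * 2 * BYTES_COLOR

total_gbps = (scene_bytes + post_bytes) * FPS / 1e9
print(f"framebuffer traffic alone: {total_gbps:.1f} GB/s")              # ~5.8 GB/s
print(f"fraction of a 22.4 GB/s shared bus: {total_gbps / 22.4:.0%}")   # ~26%

And that's framebuffer traffic only, before a single texture fetch, vertex stream or CPU access touches the same bus.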
 
Bill said:
Nobody has convinced me.

It doesn't seem you want to be convinced, which makes me wonder what the real reason for opening this thread was, and especially why you're keeping this up.

No one is going to convince you, whatever we say, and I think we've said enough.
 
Bill, you're making legitimate points, and it's all being discussed. Why must you be so militant about it all? I mean, you just seem downright angry about this eDRAM thing. It may or may not result in *your* ideal scenario, but it was implemented for reasons, several of which have been pointed out.
 
"Bandwidth is a problem if you do a lot of post-processing effects and more if you share it with the cpu."

True. But they could have done two 128 bit busses like PS3.

I understand, 256 bit seemed too expensive.

Not trying to be militant.

Here's the simple point an EDRAM defender needs to prove: that not having it would have otherwise crippled the system. Or show bandwidth limited benchmarks at around 720P.

Otherwise, it does nothing.

Let me put it another way, if X360 is not putting out better graphics than PS3, the EDRAM failed, plain and simple.

After all, MS paid for more transistors in the GPU than Sony did.
 
Bill said:
"Bandwidth is a problem if you do a lot of post-processing effects and more if you share it with the cpu."

True. But they could have done two 128 bit busses like PS3.

It's more efficient to use EDRAM.
 
True. But they could have done two 128 bit busses like PS3.
Where do you see 2 128-bit busses in PS3? If you're referring to the XDR main memory bus and the GDDR VRAM bus, the former is 2 32-bit busses (quite possibly synchronous or implemented as if it's a single 64-bit) and the latter is 1 128-bit.

I understand, 256 bit seemed too expensive.
A single 256-bit bus is cheaper than 2 independent asynchronous or synchronous 128-bit busses. Mainly because a lot more leads and pins can be shared, and that saves you quite a bit on the motherboard and a fair bit on the actual core logic, and the electricals are a bit more reliable.

Or show bandwidth limited benchmarks at around 720P.
Pile on the alpha blending and render-to-texture passes and it won't be that hard to show bandwidth limitations even at 480. Where I'm lost is your claim that it's invariably a detriment (i.e. "PLAIN AND SIMPLE"). It's actually quite easy to hit the bandwidth or fillrate limitations of just about anything. I'd agree with you that eDRAM doesn't really buy you anything up to a point, and depending on the implementation, that point could be higher or lower, and the relative gain could be small or large... but you're making it sound like being at a resolution like 720p puts that bar out of reach, which isn't really true at all... unless the GPU hardware itself can't even process enough tris, in which case, eDRAM or not isn't really your problem.
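Just to make the blending point concrete, here's a rough sketch of how many full-screen alpha-blend layers a given bus can even theoretically sustain (the bandwidth figure is an assumption, and it charitably pretends the whole bus is available for blending, which it never is):

# How many full-screen layers of alpha blending per frame a given amount of
# bandwidth can sustain. Illustrative only; in practice textures, Z traffic
# and the CPU all want the same bus, so the real ceiling is much lower.

def max_blend_layers(width, height, bus_gbps, fps=60, bytes_per_px=4):
    # Alpha blending touches the color buffer twice per pixel per layer:
    # one destination read + one result write.
    bytes_per_layer = width * height * 2 * bytes_per_px
    return bus_gbps * 1e9 / (bytes_per_layer * fps)

for w, h in [(640, 480), (1280, 720)]:
    layers = max_blend_layers(w, h, bus_gbps=22.4)
    print(f"{w}x{h}: ~{layers:.0f} full-screen blend layers/frame at 60 fps")
    # prints ~152 layers at 640x480 and ~51 at 1280x720

Heavy particle scenes burn through dozens of layers in hotspots, and since everything else is fighting for the same bandwidth, that ceiling comes down fast.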
 
Bill said:
Let me put it another way, if X360 is not putting out better graphics than PS3, the EDRAM failed, plain and simple.
Hadn't you better wait until both consoles are out so you can actually compare results? For all you know XB360 will be putting out better visuals. And even if not, if PS3 is using Cell resources to match XB360, then Xenos is a better GPU than RSX. And even then if not, if PS3 has the graphical edge, MS were aiming for a cheaper box anyway.
 
ShootMyMonkey said:
Pile on the alpha blending and render-to-texture passes and it won't be that hard to show bandwidth limitations even at 480. Where I'm lost is your claim that it's invariably a detriment (i.e. "PLAIN AND SIMPLE").

I think your last comment was particularly on point. We can create pathological cases that prove either side; but at the core of his argument is a valid question about which design point maximizes performance at a per-IC, per-subsystem, per-system level.

And you're totally correct in that he shouldn't be so definitive in his statement; it's up for debate whether the performance-bounding functions will, on average, be those which are better helped by a somewhat odd and costly MCM configuration with what is, by extrapolated standards, a small amount of eDRAM; or by dedicating roughly the same aggregate area to logic and computation. It's definitely an interesting question.
 
"And you're totally correct in that he shouldn't be so definitive in his statement; It's up for debate if the preformance bounding functions will, on average, be those which are better helped by a somewhat odd and costly MCM configuration with what is, by extrapolated standards, a small amount of eDRAM; or by dedicating roughly the same aggregate area to logic and computation. It's definitly an interesting question."

And early returns are that PS3 and its 300 million logic transistors are outperforming X360 and its 257 million logic (really more like 242, not counting the AA stuff) + 80 million of EDRAM.

"And even if not, if PS3 is using Cell resources to match XB360"

Cell is not a GPU. It would not be very helpful, except for some vertex processing.

X360 was not designed to be cheap. It uses 500 million transistors. It uses MORE transistors on the GPU. And fewer on the CPU.

Try this: X360 = 165 CPU + 337 GPU = ~500

PS3 = 250 CPU + 300 GPU = 550

X360 logic = 242 + 165 = ~407

PS3 logic = 550

The difference between 550 and 500 is negligible.

550 vs. ~400 is a big gap.
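Laying that arithmetic out explicitly (same rough, unofficial figures quoted above, in millions; the 15M for the AA hardware is just the gap between the two GPU-logic numbers):

# Transistor tallies as quoted in this thread (rough, unofficial figures, millions).
x360_cpu, x360_gpu_logic, x360_gpu_aa, x360_edram = 165, 242, 15, 80
ps3_cpu, ps3_gpu = 250, 300

x360_total = x360_cpu + x360_gpu_logic + x360_gpu_aa + x360_edram   # ~502
ps3_total  = ps3_cpu + ps3_gpu                                      # 550

x360_logic = x360_cpu + x360_gpu_logic                              # ~407
ps3_logic  = ps3_total                                              # 550 (no eDRAM)

print(f"total transistors: X360 ~{x360_total}M vs PS3 {ps3_total}M")
print(f"logic only:        X360 ~{x360_logic}M vs PS3 {ps3_logic}M")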
 
Bill said:
And early returns are that PS3 and its 300 million logic transistors are outperforming X360 and its 257 million logic (really more like 242, not counting the AA stuff) + 80 million of EDRAM.
Did you ever stop to think that the Xenos will have a steeper learning curve than RSX? That engines are going to have to be built with tiling in mind for the EDRAM to be most effective?

If you understand those points it makes sense that EARLY ON the RSX may appear to be outperforming Xenos.

Xenos is not being utilized as it was designed, since devs have only had it for 4 months! Most games have been in development for over 16 months.

Let's wait until 2006/2007 to see who's outperforming whom, shall we?
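For what "built with tiling in mind" actually means, here's a rough conceptual sketch (not the real X360 API, just the idea, with stand-in stub calls): the back buffer has to fit in the 10 MB of eDRAM, so at 720p with 4xAA the frame gets split into tiles, each rendered and then resolved out to main memory.

# Conceptual sketch of tiled rendering against a small eDRAM buffer.
# Not the actual X360 API; render/resolve calls are stand-in stubs.
import math

EDRAM_BYTES = 10 * 1024 * 1024   # 10 MB daughter-die buffer

def tiles_needed(width, height, msaa, bytes_color=4, bytes_depth=4):
    samples = width * height * msaa
    fb_bytes = samples * (bytes_color + bytes_depth)
    return math.ceil(fb_bytes / EDRAM_BYTES)

def render_tile(scene, tile):    # stub: draw only the geometry touching this tile
    pass

def resolve_tile(tile):          # stub: copy/downsample eDRAM contents to main RAM
    pass

def render_frame(scene, width=1280, height=720, msaa=4):
    n = tiles_needed(width, height, msaa)
    tile_h = math.ceil(height / n)
    for i in range(n):
        tile = (0, i * tile_h, width, min((i + 1) * tile_h, height))
        render_tile(scene, tile)   # geometry binned or re-submitted per tile
        resolve_tile(tile)
    return n

print(render_frame(None))  # 720p + 4xAA at 8 bytes/sample -> 3 tiles; no AA -> 1 tile

That per-tile binning or re-submission of geometry is exactly the kind of thing an engine has to be designed around rather than bolted on at the end, which is the point about launch titles.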
 
scooby_dooby said:
Did you ever stop to think that the Xenos will have a steeper learning curve than RSX? That engines are going to have to be built with tiling in mind for the EDRAM to be most effective?

If you understand those points it makes sense that EARLY ON the RSX may appear to be outperforming Xenos.

Xenos is not being utilized as it was designed, since devs have only had it for 4 months! Most games have been in development for over 16 months.

Let's wait until 2006/2007 to see who's outperforming whom, shall we?

Scooby,

Just like he is doing, you are assuming that there is only one way you can measure the performance of Xenos and the RSX. Clearly these two parts were designed with different goals in mind. There will be situations in which one will clearly be better than the other, and other situations where the reverse is true.

The interesting questions are:

What are each company's design goals? Are they the correct goals? Which company's product (advertently or inadvertently) better meets the correct goals?

Before we can know which part is "better", we need to decide what the correct goals are.

Nite_Hawk
 
I would maintain the goal for a GPU is simple: put out the best possible graphics per transistor budget.
 
Nite_Hawk said:
Just like he is doing, you are assuming that there is only one way you can measure the performance of Xenos and the RSX. Clearly these two parts were designed with different goals in mind. There will be situations in which one will clearly be better than the other, and other situations where the reverse is true.
I'm not. I'm saying Xenos contains a lot of new technology that is going to take a while to get used to, and it's going to take a while for game engines to really support these features.

One has to be realistic and understand that developers have only had this GPU for 4 months. Most games being released, and most game engines in existence, were started in 2004 or earlier, and they can't possibly be making the best use of these features.

With that understanding, it's clear that what we are seeing is NOT Xenos at its best, or even close, and that you shouldn't judge the capabilities of the GPU from sub-par launch games that were developed on X800s.

Of course, that's just common sense.
 