"4xAA": Microsofts big mistake.

Status
Not open for further replies.
We are talking about available bandwidths not size of caches. Think about this...the current redundant 8th SPE could consume more bandwidth if activated than what RSX is demanding on the EIB ring bus...
Bandwidth doesn't matter if it's connected to RAM that is full. If there is nothing on the other side of the bandwidth to use, then there is no reason to quote it.

You're saying we should add it because it's there. I'm saying it's BS because what you're talking about will never be used. The small amount of cache would be used by the SPEs, as the SPEs will not be very effective, if at all, with no cache.

No, you don't understand. If you did, you wouldn't have asked what the X360's hard disk has to do with what we're discussing...

You're talking about available bandwidth. The bandwidth to the hard drive is just as usable as the bandwidth to the SPEs' cache.

We're discussing RAM bandwidths and data flows connected to CPU/GPU on die memory controllers...not hard disk virtual memory which doesn't alter these available bandwidths.
If that is true, I do not see the 256 GB/s bandwidth on the daughter die of Xenos. If you were interested in that, you would include it, because by your own argument it's just as useful as using the cache on Cell's ring bus.

That RSX 57 GB/sec read|write is as real as Xenos 54 GB/sec read|write (22.4 system + 32 to EDRAM module). However, in X360's case, the XeCPU under those conditions cannot access system RAM but CELL can still access system RAM (XDR).

Going by what you're saying, Xenos would have more than 54 GB/s. It would have 22.4 system + 32 EDRAM + Xenos-to-CPU, since it too can access its CPU's cache.
 
Just to be clear, are those [H] slides (which it seems no one's properly credited by linking) RSX or G70 numbers?

Just to be thick, is it common knowledge that Cell and RSX will share both sets of memory without penalty, or should I assume that each will play mainly with its own kind of memory?

As for the slides, I'm not sure who's supposed to be impressed by them. To mix up 10x7 with 720p is a relatively minor mistake, but to fumble multi- and super-sampling WRT a GPU with a mere 128-bit memory bus is just, well, blockheaded.
 
Just to be clear, are those [H] slides (which it seems no one's properly credited by linking) RSX or G70 numbers?
G70

Just to be thick, is it common knowledge that Cell and RSX will share both sets of memory without penalty, or should I assume that each will play mainly with its own kind of memory?
Only latency will increase. It shouldn't affect RSX much, as GPUs just want bandwidth, and lots of it. Though Cell may take a hit accessing the GDDR RAM, so it may really only play in its own RAM.
 
I don't get how G70 numbers relate to RSX numbers. MSAA requires bandwidth, but can access to the required data be split b/w the XDR and GDR RAM to give RSX bandwidth comparable to G70? I'm looking at it in terms of having ~20GB/s, not 40GB/s.

On a related note, I don't follow the logic in this quote from Anand's and Derek's 360 vs. PS3 article:

At 720p, the G70 is entirely CPU bound in just about every game we’ve tested, so the RSX should have no problems running at 720p with 4X AA enabled, just like the 360’s Xenos GPU. At 1080p, the G70 is still CPU bound in a number of situations, so it is quite possible for RSX to actually run just fine at 1080p which should provide for some excellent image quality.
I'm not sure how the G70 being CPU bound relates to RSX offering AA without a noticeable performance hit, seeing as a) MSAA is bandwidth-, not CPU-limited, and b) RSX should have less bandwidth than G70. And do they think RSX will be CPU-bound in next-gen console titles? I don't really see UE2007 on RSX at 1920x1080 being held back by the CPU, given that Tim Sweeney said the latest demo was running at 1280x1024 on a 7800 GTX (at what I assume are framerates comparable to the 6800U at 640x480; also, 1920 = 1.5x 1280, but 550 = 1.3x 430MHz).
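Since we're tossing scaling numbers around, here's the arithmetic spelled out (a rough sketch using pixel counts rather than linear width; all figures are as quoted in this thread, not official specs):

```python
# Back-of-envelope check of the UE2007 scaling claim above.
# All figures are as quoted in this thread, not official specs.

def pixels(width, height):
    """Pixels per frame at a given resolution."""
    return width * height

# Resolution step from the demo's 1280x1024 to full 1080p
res_ratio = pixels(1920, 1080) / pixels(1280, 1024)  # ~1.58x the pixels

# Clock step from G70 (430 MHz) to the rumoured RSX clock (550 MHz)
clock_ratio = 550 / 430                              # ~1.28x the clock

print(f"pixel ratio: {res_ratio:.2f}x, clock ratio: {clock_ratio:.2f}x")
```

So the pixel count grows noticeably faster than the clock, which is exactly the point of the skepticism above.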
 
jvd said:
Just to be clear, are those [H] slides (which it seems no one's properly credited by linking) RSX or G70 numbers?
G70

Just to be thick, is it common knowledge that Cell and RSX will share both sets of memory without penalty, or should I assume that each will play mainly with its own kind of memory?
Only latency will increase. It shouldn't affect RSX much, as GPUs just want bandwidth, and lots of it. Though Cell may take a hit accessing the GDDR RAM, so it may really only play in its own RAM.

It depends how much latency the GPU can hide....
I'd expect it to be engineered to hide latency from its local memory, and I wouldn't necessarily expect it to tolerate the additional latency from the XDR memory without some degradation in performance. But at this point it's all speculation.
 
I also disagree with the original post. The anti-aliasing marketing that MS would provide would do little to actually affect the outcome of Xbox 360. The average person, and gamer, couldn't care less about the performance of AA. They see a difference with graphics, and if Nvidia doesn't make its 4X as good as ATI's 4X, then that will make more of a difference than some cheap marketing gimmick.
 
jvd said:
We are talking about available bandwidths not size of caches. Think about this...the current redundant 8th SPE could consume more bandwidth if activated than what RSX is demanding on the EIB ring bus...
Bandwidth doesn't matter if it's connected to RAM that is full. If there is nothing on the other side of the bandwidth to use, then there is no reason to quote it.

HUH?

What are you talking about?

jvd said:
You're saying we should add it because it's there. I'm saying it's BS because what you're talking about will never be used. The small amount of cache would be used by the SPEs, as the SPEs will not be very effective, if at all, with no cache.

No.

I've already explained this in my earlier post. I suggest you take time to digest and absorb the info already posted, because now you're wasting my time. If 7 SPEs can work on the EIB, then 8 SPEs can. I've already explained to you earlier that RSX would be like the 8th SPE. So re-read/understand the earlier posts, or perhaps some articles on CELL/FlexIO etc...

jvd said:
No, you don't understand. If you did, you wouldn't have asked what the X360's hard disk has to do with what we're discussing...

You're talking about available bandwidth. The bandwidth to the hard drive is just as usable as the bandwidth to the SPEs' cache.

The hard disk is on the SOUTH BRIDGE.

The SOUTH BRIDGE has B/W to the NORTH BRIDGE.

The SOUTH BRIDGE B/W has already been accounted for, for both X360 AND PS3, in my earlier posts.

jvd said:
We're discussing RAM bandwidths and data flows connected to CPU/GPU on die memory controllers...not hard disk virtual memory which doesn't alter these available bandwidths.
If that is true, I do not see the 256 GB/s bandwidth on the daughter die of Xenos. If you were interested in that, you would include it, because by your own argument it's just as useful as using the cache on Cell's ring bus.


I suggest you re-read/understand what's already been posted...<broken record>

I've included them as INTERNAL B/W in the EDRAM module already. The same as the EIB B/W in CELL.


jvd said:
That RSX 57 GB/sec read|write is as real as Xenos 54 GB/sec read|write (22.4 system + 32 to EDRAM module). However, in X360's case, the XeCPU under those conditions cannot access system RAM but CELL can still access system RAM (XDR).

Going by what you're saying, Xenos would have more than 54 GB/s. It would have 22.4 system + 32 EDRAM + Xenos-to-CPU, since it too can access its CPU's cache.

Yep. I don't think Dave's article mentions this but the 'leak' has certain caveats for read/writes from cache and system RAM to the north bridge on Xenos. I think it's,

(10.8 cache + 22.4 system) read + 32 read/write EDRAM module ~ 65.2 GB/sec

OR

22.4 system write + 32 read/write EDRAM module ~ 54.4 GB/sec

However, XeCPU would still not be able to access system RAM under those conditions.

Think of XeCPU + Xenos Shader module + Xenos EDRAM module as one big die. This die has external system B/W to GDDR3, 22.4 GB/sec. The same with CELL + RSX with external system B/W with XDR and GDDR3 ~ 48 GB/sec.

Which nicely brings me to the link below. Read what I posted first in this thread here...

http://www.beyond3d.com/forum/viewtopic.php?p=560015#560015

Just think of all the dies as acting as one with external bandwidths with certain caveats when making comparisons with these bandwidths between PC's, X360 and PS3...
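To make the tallies above explicit, here's a quick sketch adding up the figures as quoted in this thread (rumoured numbers, not official specs):

```python
# Per-console bandwidth tallies (GB/s), using the figures quoted
# in this thread -- treat them as rumoured, not official.

ps3_system = {"GDDR3 (RSX local)": 22.4, "XDR (system)": 25.6}
flexio = 35.0  # Cell<->RSX FlexIO read+write

xenos = {"GDDR3 (UMA)": 22.4, "to EDRAM module": 32.0}

print("PS3 total system memory B/W:", sum(ps3_system.values()))         # ~48
print("RSX read|write B/W:", ps3_system["GDDR3 (RSX local)"] + flexio)  # ~57
print("Xenos read|write B/W:", sum(xenos.values()))                     # ~54
```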
 
I've tabled the data from HW.fr's GTX review:

Code:
      12x10   " 4x8     % Hit        19x12   " 4x8     % Hit
HL2     149     122     18.12          116      76     34.48
D3*     172      96     44.19          116      61     47.41
FC       99      81     18.18           89      48     46.07
SC:CT    98      66     32.65           62      40     35.48
PF       73      52     28.77           56      34     39.29
AoW     119      93     21.85           84      53     36.90
CMR05   159      87     45.28           85      46     45.88

* 8x AF on all benches.
Not as rosy a picture when you consider only more modern games. Although both the resolutions are slightly higher than the 720p and 1080p targets, GTX's "real-world" available bandwidth is likely higher than RSX's (albeit probably not proportionally).

BTW, just to keep things half educational (rather than all skeptical), can I deduce a similar limiting factor from the identical AA+AF performance hits at both resolutions (D3, SC, CMR)? Or would I need to look at more data to conclude it's a vertex or ROP limitation? (Obviously it's not a CPU limitation, as all three games show far higher performance at the lower res. I'm also guessing it's not a bandwidth limitation for the same reason, though maybe I'm not distinguishing between fetches per clock and bandwidth per clock.)

Edit: Bah, HTML formatting works when I preview, but not when I post, and code doesn't play nicely with tabs.
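For anyone wanting to reproduce the % Hit columns in the table: they're just the relative frame-rate drop from enabling 4xAA/8xAF, e.g.:

```python
# Relative performance hit from enabling AA/AF, as tabled above.

def pct_hit(fps_no_aa, fps_aa):
    """Percentage frame-rate loss when AA/AF is enabled."""
    return round((1 - fps_aa / fps_no_aa) * 100, 2)

print(pct_hit(149, 122))  # HL2 @ 12x10 -> 18.12
print(pct_hit(172, 96))   # D3  @ 12x10 -> 44.19
print(pct_hit(116, 61))   # D3  @ 19x12 -> 47.41
```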
 
Jaws said:
I've already explained above the bandwidth of RSX is to FlexIO + GDDR3 ~ 57 GB/sec. The EIB ring bus within CELL can pass data to RSX...e.g. procedurally generate data without flooding XDR as an example...

The EIB ring bus has 96 bytes/cycle bandwidth. If it's clocked at half, 1.6 Ghz, that's ~ 153 GB/sec or 306 GB/sec at full speed B/W. The PPE L2 cache and 7 SPE's are on that EIB ring bus. Just because RSX accesses FlexIO, doesn't mean that XDR is accesed. The data to/from the EIB is passing to/from FlexIO and can bypass XDR...


At this point you are effectively arguing semantics, as far as this thread is concerned.

It's memory bandwidth, and its application, that is being discussed here, i.e. Antialiasing. How is the bandwidth of the EIB relevant here? What are you going to do, write the frame buffer to one of the SPE's local storage? :rolleyes:
 
Jaws said:
That RSX 57 GB/sec read|write is as real as Xenos 54 GB/sec read|write (22.4 system + 32 to EDRAM module). However, in X360's case, the XeCPU under those conditions cannot access system RAM but CELL can still access system RAM (XDR).

That is totally silly.

How does half of this apply to AA? If you are going to count the read+write over the FlexIO even though it exceeds the MEMORY bandwidth of the entire system, then it has no relevance to the AA issue. If total memory bandwidth is 47GB/s on the PS3, and AA is a bandwidth-limited task, I see no reason why we are discussing 57GB/s of RSX bandwidth. That is just playing with numbers, which of course goes both ways. So let's massage the numbers some:

22GB/s UMA + 22GB/s [11GB/s upstream / 11GB/s downstream] Xenon-to-Xenos L2 access + 32GB/s GPU Parent Die-to-GPU Daughter Die + 256GB/s GPU Daughter logic-to-eDRAM = 332GB/s

Wow, 332GB/s > 57GB/s :rolleyes:

Oh, but wait, not all of this is relevant to the question at hand! Both scenarios are fallacious. RSX may have a total of 57GB/s read/write bandwidth, but it surely does not have 57GB/s of memory bandwidth, because the PS3 only has ~48GB/s of total system memory bandwidth. The PS3 has ~10GB/s less memory bandwidth than the RSX total bandwidth because not all of it goes to memory. So that number is totally irrelevant for the discussion of AA.

Further, the math about expected bandwidth hits does not line up with reality. While it is nice to call them "maximum" usage for HDR and AA and the like, experience is telling us that this is not the case. Theoretically, mid-level cards like the 6600GT should be able to do nice amounts of AA without having bandwidth issues. Yet they do. Even the NV40 is bandwidth limited at times, as is the G70. How can a GPU with almost 40GB/s be bandwidth limited based on those numbers?

Maybe because theoretical maximum performance usage does not line up with the GPU's memory efficiency and how bandwidth is consumed in real-world scenarios? ATI has quoted much higher numbers for expected bandwidth usage. I cannot find the link, but I remember them giving a range of something like 26GB/s-134GB/s. My numbers could be foggy, but they were quite large.

Back to the original points:

1. These are not even Xenos vs. RSX slides. Neither is ever mentioned.

2. Comparing DX7 and DX8 style games, as many of the games in the slides are, as an estimate of next gen usage is completely irrelevant. One glance at the high geometry Sony Render Targets from E3 clearly demonstrates a huge gap in complexity.

3. The 720p slide contains a lot of CPU limited games, therefore it is impossible to compare the performance hit for AA at this resolution.

4. Modern games like Far Cry and Doom 3 take a ~40% hit at 1600x1200 (which is only ~7% fewer pixels than 1080p). 40% is indeed substantial, and it dwarfs the 1-5% number ATI has been quoting for 720p 4xMSAA. Further, it is obvious Sony is aiming much higher than either game for next gen. Simply put, the G70 is not giving free AA, or even a minimal performance hit, in modern games. This could be more pronounced in games with much higher levels of geometry.

5. The G70 most likely has more memory bandwidth relative to the expected available bandwidth for the RSX in the PS3 in real-world scenarios. If the RSX used all the bandwidth available to it (22GB/s to the GDDR3, 15+20GB/s over FlexIO, which more than maxes out the 25.6GB/s of XDR memory bandwidth), that would not only leave the CELL CPU memory starved--it would leave it completely IDLE. This would defeat the purpose, of course, and is unrealistic to say the least.

To quickly compare: if the RSX is allocated the same 38GB/s of bandwidth the G70 has, that would leave the PS3's CELL with 10GB/s of main-memory bandwidth. CELL is a very memory-dependent design. No one here can say whether 10GB/s is enough or not, but it would be fair to say this could be a significant hurdle to getting 218GFLOPs out of CELL.

Best-case scenario, the RSX could end up having comparable available bandwidth to the G70. Yet the RSX is looking to be a ~28% faster core (550MHz vs. 430MHz).

6. G70 cannot do HDR and MSAA at the same time. The reference to SSAA (which has a much larger performance hit than MSAA) with HDR is misleading. The benchmarks show that you need two G70s in SLI to get HDR + SSAA in modern games. Again, these are PR slides and nothing more.

7. The PS3 is a closed box. Miracles occur in these wonderful closed boxes. i.e. We can expect technically savvy developers over the next 5 years to find ways to maximize the potential of the system, whereas even nasty DX7 games are going to be more inefficient than the wonders that appear on the PS3.

8. ATI are not idiots. There is a reason they gave up 105M transistors for eDRAM. Ditto Sony with the PS2 and Nintendo with the GCN.

People really need to stop playing down the eDRAM. Comparing the bandwidth to CELL, when it is over and beyond that of the total system memory bandwidth, is silly.

Theoretical numbers aside, we already know modern games take a performance hit with AA enabled on G70. Whether that is a bandwidth issue, fillrate, etc. does not matter. If RSX is an implementation of G70 technology, as is expected, then RSX is going to take a hit in modern games with 4xAA enabled at 1080p. A large hit at that.
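The allocation arithmetic behind point 5 and the follow-up is simple enough to spell out (all GB/s figures are the rumoured ones quoted in this thread):

```python
# If RSX is given G70-like bandwidth, how much is left for CELL?
# Figures are the rumoured ones quoted in this thread.

GDDR3 = 22.4            # RSX local memory B/W
XDR = 25.6              # Cell system memory B/W
total = GDDR3 + XDR     # ~48 GB/s total system memory B/W

g70_like = 38.0         # rough G70 memory bandwidth cited above
cell_left = total - g70_like

print(f"total: {total:.1f} GB/s, left for CELL: {cell_left:.1f} GB/s")
```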

EDIT:
ultimate_end said:
It's memory bandwidth, and its application, that is being discussed here, i.e. Antialiasing. How is the bandwidth of the EIB relevant here? What are you going to do, write the frame buffer to one of the SPE's local storage?

Heh, you beat me ;)

It is a fallacious comparison. Basically, RSX's total bandwidth is irrelevant when discussing AA and total system bandwidth, when total system bandwidth is less than total RSX bandwidth (obviously the RSX can be fed information from CELL, so a higher bandwidth figure is understandable).

The eDRAM gets under the skin of some. Anyone who follows the forums can see that.

I am still chuckling over how this is some type of Sony PR win--even though Xenos and RSX are not mentioned--yet Major Nelson's wasn't, because there we could divide the facts from the sweet talk. For some reason that same principle is not applying here :LOL:
 
Pete said:
I've tabled the data from HW.fr's GTX review:

Code:
      12x10   " 4x8     % Hit        19x12   " 4x8     % Hit
HL2     149     122     18.12          116      76     34.48
D3*     172      96     44.19          116      61     47.41
FC       99      81     18.18           89      48     46.07
SC:CT    98      66     32.65           62      40     35.48
PF       73      52     28.77           56      34     39.29
AoW     119      93     21.85           84      53     36.90
CMR05   159      87     45.28           85      46     45.88

* 8x AF on all benches.
Not as rosy a picture when you consider only more modern games. Although both the resolutions are slightly higher than the 720p and 1080p targets, GTX's "real-world" available bandwidth is likely higher than RSX's (albeit probably not proportionally).


Nice table, but IMHO this shows not even half of the truth, because most PC games only push a low number of polygons per second.

Aren't the new consoles meant to render at least 100M+ polygons per second?

So how well will the Z-buffer and framebuffer compression work when the polygon-to-pixel ratio approaches 1?

IMHO not very well. With MSAA a little bit better, but without MSAA rather bad.

So IMHO the effective bandwidth of the RSX will be much smaller than the effective bandwidth of the G70.

IMHO ATI skipped bandwidth compression and used eDRAM instead for a reason.

IMO the simple truth will be that the RSX will not be able to play games with AA and HDR enabled when rendering a lot of polygons on screen.
 
ultimate_end said:
How is the bandwidth of the EIB relevant here? What are you going to do, write the frame buffer to one of the SPE's local storage? :rolleyes:

At this point we might as well go all the way and include xenon's L1 caches which probably amount to hundreds of GB/s all by themselves, plus the L2 cache which is whatever GB/s it might be. Yay for pointless spec wars! :devilish:
 
@ Jaws : Though I think everyone agrees with your BW calculation, in the context of discussion (is RSX BW limited hence unable to do AA?) I don't see how RSX<>Cell bandwidth helps.

Xenos's daughter-die BW (into and internal) helps provide for AA. RSX<>Cell surely won't because the available storage in Cell isn't 1) large enough for anything useful for AA (?) and 2) Is needed for the SPE's to do something useful.

To all intents and purposes, RSX is limited to 47 GB/s of available BW for rendering. And from this some is taken up by Cell, so maybe 30 GB/s for rendering. Whereas Xenos has 10 GB/s from the UMA, plus the bandwidth-saving features of the daughter die (which are difficult to quantify!)...say 10+32 = 42 GB/s (+ a bonus figure for BW saved by daughter-die processing). Xenos has its own localised BW to local storage that isn't taken up by the CPU.

Though Cell can be used to pass data directly to RSX (hence allowing that extra BW), how does this help with AA, especially in real-world situations?
 
I agree with your post Acert93. I think you bring it in perspective.

I really like this forum; although there may be disagreements in certain areas, at least it's civilized and somewhat constructive when you take everything in (opinions and links provided, even :) ) and break it down to help bring you to your own conclusions. (Unlike some other sites :rolleyes: )

In the end, no matter which console, whichever preferred method each of the companies took (design-wise), I don't think we will be disappointed, especially once the developers get a handle on bringing out the power of the systems. I'm just looking forward to what the Xbox 360 and PS3 have in store for us.
 
Acert93 said:
It is a falacious comparison. Basically RSX's total bandwidth is irrelevant when discussing AA and total system bandwidth when total system bandwidth is less than total RSX bandwidth (obviously because the RSX can be fed information from CELL, so a higher bandwidth need is understandable).

I think Jaws just got a little upset at the notion of RSX's total bandwidth being "BS" bandwidth. Yeah it's nice that there is a very good interconnect between Cell and RSX, but as has been mentioned, it has little relevance to the current topic.

Acert93 said:
I am still chuckling over how this is some type of Sony PR win--even though Xenos and RSX are not mentioned--yet Major Nelson was not because we could divide the facts from the sweet talk. For some reason that same principle is not applying here :LOL:

I guess the point is that Microsoft has been a little weak in marketing the virtues of Xenos's "smart memory" (which is probably moot considering that the consoles aren't out yet). But then there are those slides. I don't think that nVidia needs to explicitly mention RSX or Xenos on those slides; the connection will be made automatically, maybe not for those of us on this board, but certainly for people like game journalists, for example (who are all that matters when it comes to the broader audience at large). I think the same can be said about the Major Nelson "analysis". I imagine that this is what is concerning to some people.

BTW, the title of this topic should be changed. "4xAA. Microsoft's big mistake." really does sound like Brimstone is claiming that Microsoft is wrong to have antialiasing in the X360! :LOL: That's what I'm chuckling about, no wonder some people have been confused.
 
Fafalada said:
What are you going to do, write the frame buffer to one of the SPE's local storage?
Why not, if I can find a use for doing so? 8)

Well it wouldn't be impossible... :D

Now Faf, don't overdo it OK? We don't want you having a heart attack trying to build the ultimate OMGWTFBBQ PS3 engine :p.
 
ultimate_end said:
It's memory bandwidth, and its application, that is being discussed here, i.e. Antialiasing.

Backbuffer B/W is directly relevant to anti-aliasing. It has a significant impact on the PC and PS3, but negligible impact on the X360, which doesn't need to worry about it because of its EDRAM module.

ultimate_end said:
How is the bandwidth of the EIB relevant here?

The EIB is relevant here. Simple data flow,

XDR 25.6 GB/sec Read/write ---> [CELL computes stuff] --- > FlexIO 35GB/sec read/write

The EIB B/W is much greater than XDR or FlexIO.

The point here is simple. The RSX would act like an extension of the EIB ring bus, as I've already explained earlier. The EIB has a high bandwidth so that it can sustain the RSX + 7 SPEs. CELL can still continue to access XDR and pass data to RSX or to the 7 SPEs. Therefore RSX net B/W is STILL 57 GB/sec AND CELL can still access XDR at 25.6 GB/sec. JVD was disputing this by saying the 57 GB/sec is BS.
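To spell out the numbers being argued over (a sketch; the clocks and bus widths are the speculative figures quoted in this thread):

```python
# EIB and RSX bandwidth figures as quoted above -- speculative.

EIB_BYTES_PER_CYCLE = 96
HALF_CLOCK_HZ = 1.6e9  # half of a rumoured 3.2 GHz Cell clock

eib_half = EIB_BYTES_PER_CYCLE * HALF_CLOCK_HZ / 1e9  # ~153.6 GB/s
eib_full = eib_half * 2                               # ~307 GB/s

flexio = 35.0   # Cell<->RSX read+write
gddr3 = 22.4    # RSX local memory
print(f"EIB: {eib_half:.1f} / {eib_full:.1f} GB/s")
print(f"RSX net read|write: {flexio + gddr3:.1f} GB/s")  # ~57
```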

jvd said:
That's if the Cell CPU is not used for anything, which I think we all know is BS.

It's not BS, hence the subsequent discussion.

People keep confusing the RSX peak read|write B/W of 57 GB/sec as B/W to XDR. It's NOT. It's B/W to FlexIO + GDDR, AND CELL can still access XDR. I've mentioned this several times in this thread, and it becomes frustrating when people keep ignoring/not understanding what's already been posted. EIB below...



http://www.research.ibm.com/cell/

ultimate_end said:
What are you going to do, write the frame buffer to one of the SPE's local storage? :rolleyes:

Read above. You're missing my point.

Shifty Geezer said:
@ Jaws : Though I think everyone agrees with your BW calculation, in the context of discussion (is RSX BW limited hence unable to do AA?) I don't see how RSX<>Cell bandwidth helps.

Read my above replies.

Shifty Geezer said:
Xenos's daughter-die BW (into and internal) helps provide for AA. RSX<>Cell surely won't because the available storage in Cell isn't 1) large enough for anything useful for AA (?) and 2) Is needed for the SPE's to do something useful.

See reply above, as you're missing my point.

Shifty Geezer said:
To all intents and purposes, RSX is limited to 47 GB/s of available BW for rendering. And from this some is taken up by Cell, so maybe 30 GB/s for rendering. Whereas Xenos has 10 GB/s from the UMA, plus the bandwidth-saving features of the daughter die (which are difficult to quantify!)...say 10+32 = 42 GB/s (+ a bonus figure for BW saved by daughter-die processing). Xenos has its own localised BW to local storage that isn't taken up by the CPU.

Basically, X360 doesn't have contention with system RAM for backbuffer B/W, but the PC and PS3 do. Which is what you're trying to say, if I'm not mistaken, and which is what I already posted in my first post in this thread.

Shifty Geezer said:
Though Cell can be used to pass data directly to RSX (hence allowing that extra BW), how does this help with AA, especially in real-world situations?

It's about where you store the backbuffer. If the B/W costs are too high, you need B/W from elsewhere. How you efficiently manage B/W is up to the programmer. By optimising B/W elsewhere, you're indirectly helping AA.

Acert93 said:
Jaws said:
That RSX 57 GB/sec read|write is as real as Xenos 54 GB/sec read|write (22.4 system + 32 to EDRAM module). However, in X360's case, the XeCPU under those conditions cannot access system RAM but CELL can still access system RAM (XDR).

That is totally silly

SEE SIG. YOU KNOW WHY. DON'T MAKE THIS ARRANGEMENT MORE DIFFICULT. :rolleyes:
 
Shifty Geezer said:
To all intents and purposes, RSX is limited to 47 GB/s of available BW for rendering. And from this some is taken up by Cell, so maybe 30 GB/s for rendering. Whereas Xenos has 10 GB/s from the UMA, plus the bandwidth-saving features of the daughter die (which are difficult to quantify!)...say 10+32 = 42 GB/s (+ a bonus figure for BW saved by daughter-die processing). Xenos has its own localised BW to local storage that isn't taken up by the CPU.

Yes, this is the problem with trying to make comparisons between consoles based on one or two simplified numbers. Only programmers working with the specific hardware can know such things as how much extra latency there is in accessing XDR from RSX or how good the caches and other technologies in RSX are at alleviating this. There is so much about these consoles we don't know and can't quantify, as you say. Even then, due to the vast complexity and differing overall system architectures of the two consoles, I'd imagine it to be quite a while before accurate comparisons can be made. Perhaps we should all stop trying to compare :LOL: :LOL:

I prefer to forget about the XDR being accessible by RSX because I like to think that one day, Cell will be utilised well enough that it would need all the XDR bandwidth and capacity for itself. Some devs are going to try all sorts of schemes to use Cell and its XDR RAM to help take some of the burden off the GDDR3, but I don't think I'm wrong to say that this won't be as efficient as if RSX wasn't as strangled for VRAM bandwidth as it is. I already fear that most PS3 games will actually not have antialiasing, except for an effective supersampling AA brought about by internal resolutions being scaled down to 480i/480p etc (and then hoping that 1080i/1080p by itself is "good enough" to alleviate jaggies if you have an HDTV). Maybe it will work, maybe it won't, but it's something of a waste of MSAA capabilities. Which is a shame, Mr PS2 would have killed to get hardware FSAA!

I have to give kudos to Microsoft. This time around they seem to have put together a nice, relatively well balanced, gaming console. They will at last be able to shake off the "PC in a box" stigma that surrounded the original Xbox.

But anyway, basically, at the end of the day we will just have to wait and see what sort of games these consoles can bring (and hope that all the PR and marketing BS doesn't affect things too much). As has already been mentioned in this thread, eventually none of this will matter and all we will be left with is the games. Then people will be able to make their own minds up.
 
ultimate_end said:
...
I think Jaws just got a little upset at the notion of RSX's total bandwidth being "BS" bandwidth. Yeah it's nice that there is a very good interconnect between Cell and RSX, but as has been mentioned, it has little relevance to the current topic.

I suggest you read my replies above, because I don't think you understand that AA is a function of system bandwidth, and its relevance to this topic.
 