One dev's view: X2 a disappointment

For the tenth time, we don't need to fit a full back buffer in edram.
We can split a frame into 2 or 4 buffers and render a sub-frame at a time.
There are a few algorithms that may require full knowledge of the back buffer, such as some kinds of tone mapping, but it should still be possible to multipass that, gathering information on each sub-frame and then doing a final tone-mapping pass.
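
As a rough illustration of that multipass idea, a minimal CPU-side sketch: each sub-frame pass accumulates log-luminance, and one final pass tone-maps every sub-frame with the combined result. The tile count, buffer layout and Reinhard operator here are illustrative assumptions only, not anything taken from the thread.

// Sketch: gather luminance per sub-frame, then a final global tone-mapping pass.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int W = 1280, H = 720, TILES = 4;                  // 4 horizontal bands (assumed)
    std::vector<float> lum(W * H, 0.5f);                     // HDR scene luminance per pixel

    // Passes 1..TILES: "render" each sub-frame and accumulate log-luminance stats.
    double logSum = 0.0;
    for (int t = 0; t < TILES; ++t) {
        const int y0 = t * H / TILES, y1 = (t + 1) * H / TILES;
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < W; ++x)
                logSum += std::log(1e-4 + lum[y * W + x]);
    }
    const float avgLum   = float(std::exp(logSum / (W * H))); // log-average scene luminance
    const float exposure = 0.18f / avgLum;                    // classic "key" value

    // Final pass: tone-map each sub-frame using the *global* exposure.
    for (int t = 0; t < TILES; ++t) {
        const int y0 = t * H / TILES, y1 = (t + 1) * H / TILES;
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < W; ++x) {
                const float l = lum[y * W + x] * exposure;
                lum[y * W + x] = l / (1.0f + l);              // simple Reinhard operator
            }
    }
    std::printf("avg luminance %.3f, exposure %.3f\n", avgLum, exposure);
    return 0;
}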
 
one said:
Jaws said:
Why? What difference would that make? I've stated my view in the original thread above and this thread too. But I'm kinda cheesed off at all these knee-jerk BS remarks, so I'll elaborate...
Well, I should have been more specific too: are those points based only on his own educated speculation, or on the info the guy was given by SCEA or others involved?
...

Those points were info from an SCEA visit around September 2004, not speculation. And if my word isn't good enough, then it never will be...
 
nAo said:
For the tenth time, we don't need to fit a full back buffer in edram.
We can split a frame into 2 or 4 buffers and render a sub-frame at a time.

Yes, if you want to write your own tile-based or SFR algorithms. How many devs want to go through the trouble?

The PS2 has absolutely amazing bandwidth numbers for its eDRAM, but that didn't give it any real advantage over non-edram systems in rendering performance or quality. In fact, image quality wise, it gets its ass handed to it by the XBox IMHO. The pain of using limited amounts of eDRAM and all of that swapping in and out of eDRAM slashes the hyped performance to earthbound levels. Likewise, the GC's SRAM was supposed to mean great things, but it ultimately didn't deliver a beating on the NV2A.

Now on the XBOX2 if I want to render a 720p or 1080i scene/texture in HDR with 4x samples, I have to resort to a contorted split frame technique.
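
For reference, a back-of-the-envelope calculation of those buffer sizes. It assumes an FP16 RGBA colour sample (8 bytes) plus a 32-bit depth/stencil sample and no framebuffer compression; those are illustrative assumptions, not confirmed specs.

// Rough render-target sizes for 1280x720 under a few colour/AA combinations.
#include <cstdio>

int main() {
    const double MB = 1024.0 * 1024.0;
    const double pixels = 1280.0 * 720.0;                    // 921,600 pixels
    struct Cfg { const char* name; int samples; int colourBytes; };
    const Cfg cfgs[] = {
        {"32-bit colour, no AA", 1, 4},
        {"32-bit colour, 4x AA", 4, 4},
        {"FP16 HDR,      no AA", 1, 8},
        {"FP16 HDR,      4x AA", 4, 8},
    };
    for (const Cfg& c : cfgs) {
        const double bytes = pixels * c.samples * (c.colourBytes + 4 /* Z/stencil */);
        std::printf("%s : %5.1f MB%s\n", c.name, bytes / MB,
                    bytes / MB > 10.0 ? "  (does not fit in ~10 MB)" : "");
    }
    return 0;
}

Under those assumptions only the plain 32-bit, no-AA case (about 7 MB) fits comfortably; anything with HDR or multisampling spills, hence the split-frame complaint.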

I'm sorry, but it's still disappointing. 32MB of eDRAM is probably the least disappointing figure I could hear. I'm a heavy HDTV user. GT4 looks *atrocious* on my system. I have an HTPC on the same system, and no 720p+FSAA support = major disappointment. I want next-gen games to have 720p and FSAA *always on*, like 480p is supported on almost every game on the Xbox1.
 
DemoCoder said:
Yes, if you want to write your own tile-based or SFR algorithms. How many devs want to go through the trouble?
What if... they didn't? I mean, MS is, on-the-whole, sensitive to developers' plights. Do you really think that if the whole framebuffer does not fit into eDRAM, MS just says "Tough, not our problem that nearly every game runs into the same problem"?

Not like I know any of what's going on, but think about it for a second. If the case is as common as you claim (and I think you're right about that), it would not be improbable that MS has *something* to help. Whether it be template code or new/different API calls or the hardware works differently. I don't know, but if it is a problem, I bet MS already has help.
 
Well, MS could offer frame-capture or a scenegraph API and do the workload split underneath, but those have failed in the past for various reasons. Remember 3dfx TBR drivers? Remember retained-mode DX?
 
DemoCoder said:
Yes, if you want to write your own tile-based or SFR algorithms. How many devs want to go through the trouble?
Splitting a viewport into 4 sub-viewports is not what I'd call big trouble,
once you guarantee (and it's easy..) that your non-opaque objects' rendering order is the same in each viewport.
Maybe it's not the most efficient thing to do (especially for procedural geometry..), but it doesn't seem, at least to me, such a big burden!
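
A minimal sketch of that split, under stated assumptions: carve the full viewport into four sub-viewports and submit the same pre-sorted draw list, in the same order, once per sub-viewport. The Rect/Draw types and the submit() routine are hypothetical stand-ins for whatever the real API would provide.

// Sketch: render the frame as four sub-viewports, reusing one sorted draw list.
#include <cstdio>
#include <string>
#include <vector>

struct Rect { int x, y, w, h; };
struct Draw { std::string name; bool opaque; };

// Placeholder for "set viewport/scissor to vp, then issue draw call d".
static void submit(const Rect& vp, const Draw& d) {
    std::printf("viewport(%d,%d %dx%d): draw %s\n", vp.x, vp.y, vp.w, vp.h, d.name.c_str());
}

int main() {
    const int W = 1280, H = 720;
    const Rect sub[4] = { {0, 0, W / 2, H / 2}, {W / 2, 0, W / 2, H / 2},
                          {0, H / 2, W / 2, H / 2}, {W / 2, H / 2, W / 2, H / 2} };

    // One draw list for the whole frame, sorted once (opaque first, then transparents
    // back-to-front), so the non-opaque rendering order is identical in every sub-viewport.
    const std::vector<Draw> drawList = { {"terrain", true}, {"character", true},
                                         {"glass", false}, {"smoke", false} };

    for (const Rect& vp : sub)              // sub-frame 0..3
        for (const Draw& d : drawList)      // same order each time
            submit(vp, d);
    return 0;
}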

The PS2 has absolutely amazing bandwidth numbers for its eDRAM, but that didn't give it any real advantage over non-edram systems in rendering performance or quality.
Maybe you should reconsider that. Blazing-fast full-screen ops or free alpha blending are real advantages. Unfortunately the GS is lacking a lot of neat stuff..

In fact, image quality wise, it gets its ass handed to it by the XBox IMHO.
I agree, but the problem of PS2 image quality has almost nothing to do with its limited edram.
The pain of using limited amounts of eDRAM and all of that swapping in and out of eDRAM slashes the hyped performance to earthbound levels.
It's not really a pain; every half-decent PS2 developer can confirm that.

I'm sorry, but it's still disappointing. 32MB of eDRAM is probably the least disappointing figure I could hear. I'm a heavy HDTV user. GT4 looks *atrocious* on my system. I have an HTPC on the same system, and no 720p+FSAA support = major disappointment. I want next-gen games to have 720p and FSAA *always on*, like 480p is supported on almost every game on the Xbox1.
Maybe it's disappointing (but we don't know the final figure; maybe we are going to get more than 10.5 MBytes), but we can live with it. I agree that more is better and that 32MB of eDRAM would be a sweet spot for this generation, but once you've developed on a platform that doesn't even clip triangles.. doing multi-viewport rendering is a piece of cake ;)
 
DaveBaumann said:
There is no way that MS would pick an arbitrary quantity of EDRAM, and even less chance of ATI just implementing it because MS told them to.
Seeing as MS are paying ATi to do whatever they want, surely MS *can* say "be a good egg and stick 10.5 MB of EDRAM on, please?" and ATi say "Yes, sir. Certainly, sir. Whatever the gov'nor wants, sir!" ATi can hardly say "No, push off. We're not sticking any EDRAM on, and we're not giving you 32 unified shaders. You're getting 10, that's all, and tough!" MS are calling the shots and, short of advice and technical limits, there's no reason to think the system isn't based on MS's ideas rather than IBM's and ATi's.

Anyway, one can only hope this isn't true, because 30 fps sucks! My Amiga 500 could manage 50 fps (60 NTSC) some 20 years ago! Why on earth can't cutting-edge tech manage it? You know, technological progress is a lie. I've seen digital broadcasts and they suffer compression artefacts like billy-o. Give me good old analogue any day!

Actually, could this 30 fps claim be a sign of limited bandwidth on XB2, as data is streamed because it doesn't fit in the EDRAM frame buffer? If there's not enough throughput to manage 720p, 2xAA, 60 fps, cutting back to 30 fps makes sense.
 
Shifty Geezer said:
Seeing as MS are paying ATi to do whatever they want, surely MS *can* say "be a good egg and stick 10.5 MB of EDRAM on, please?" and ATi say "Yes, sir. Certainly, sir. Whatever the gov'nor wants, sir!" ATi can hardly say "No, push off. We're not sticking any EDRAM on, and we're not giving you 32 unified shaders. You're getting 10, that's all, and tough!" MS are calling the shots and, short of advice and technical limits, there's no reason to think the system isn't based on MS's ideas rather than IBM's and ATi's.

Anyway, one can only hope this isn't true, because 30 fps sucks! My Amiga 500 could manage 50 fps (60 NTSC) some 20 years ago! Why on earth can't cutting-edge tech manage it? You know, technological progress is a lie. I've seen digital broadcasts and they suffer compression artefacts like billy-o. Give me good old analogue any day!

Actually, could this 30 fps claim be a sign of limited bandwidth on XB2, as data is streamed because it doesn't fit in the EDRAM frame buffer? If there's not enough throughput to manage 720p, 2xAA, 60 fps, cutting back to 30 fps makes sense.


I don't know about yours, but my Amiga 500 (heck, even my A1200) couldn't push the gazillions of pixels, polys and all the vertex/pixel ops that the Xbox2 will be able to push..

Now, regarding the comment about 30fps.. really, has this guy spoken to EVERY xbox2 dev? No... For starters, Itagaki's Team Ninja is developing some titles, and you can forget that, for example, DoA4 is at 30 fps. Itagaki would commit seppuku if the game ran at 30fps... :D

Surely there will be games that run at 30fps, but not all of them, not even close. OTOH, we will see some of the most beautiful games running at 30fps, with all the bells and whistles of Xbox2..

I prefer a rock-solid framerate over 60fps with dips etc.
Of course, a solid 60 is "best", but it also has to do with the type of game; you really don't need 60fps in, for example, an RPG, where you could instead focus on "mind-boggling" graphics and art and stuff...
 
In my mind it makes sense that ~10MB won't do much good and will in the end simply amount to wasted transistors. Once you start having to store bits of the frame buffer in another memory pool then the advantages (those being huge local bandwidth and the conservation of external bandwidth) diminish rapidly. Yea, you can do some simple things really fast, but simple things tend to be really fast anyway, and those transistors would probably have been better spent on other things.
 
BOOMEXPLODE said:
Once you start having to store bits of the frame buffer in another memory pool then the advantages (those being huge local bandwidth and the conservation of external bandwidth) diminish rapidly.
Bits? We are speaking about big chunks of the frame buffer.
Do you care to explain how the advantages rapidly diminish?
Swapping portions of the frame buffer 2 or 4 times per frame gives a negligible hit to performance/memory bandwidth.
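
As a quick sanity check of that, here is the cost of resolving the whole 720p colour buffer from eDRAM to main memory every frame, using figures floated elsewhere in the thread (32-bit colour, 60 fps, ~25 GB/s bus) as assumptions:

// Sketch: traffic from copying the full 720p colour buffer out of eDRAM each frame.
#include <cstdio>

int main() {
    const double GB = 1024.0 * 1024.0 * 1024.0;
    const double colourBytes = 1280.0 * 720.0 * 4.0;   // whole 32-bit colour buffer
    const double fps = 60.0, busGBs = 25.0;            // assumed figures from the thread

    const double perSecond = colourBytes * fps;        // every tile resolved once per frame
    std::printf("resolve traffic: %.2f GB/s, i.e. %.1f%% of a %.0f GB/s bus\n",
                perSecond / GB, 100.0 * perSecond / (busGBs * GB), busGBs);
    return 0;
}

That works out to roughly 0.2 GB/s, well under one percent of the bus, and it stays small even if portions are swapped a few times per frame.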
 
Shifty Geezer said:
Seeing as MS are paying ATi to do whatever they want, surely MS *can* say "be a good egg and stick 10.5 MB of EDRAM on, please?" and ATi say "Yes, sir. Certainly, sir. Whatever the gov'nor wants, sir!" ATi can hardly say "No, push off. We're not sticking any EDRAM on, and we're not giving you 32 unified shaders. You're getting 10, that's all, and tough!" MS are calling the shots and, short of advice and technical limits, there's no reason to think the system isn't based on MS's ideas rather than IBM's and ATi's.

I don’t know how much EDRAM there will be in Xenon’s graphics any more than any non-NDA’d developer does, beyond the so-called leaks – there may be 10MB, there may be 10.5MB, there may be none, who knows. What I’m saying is that the figure that is chosen will not have been an arbitrary number for no reason. MS are paying ATI to create the graphics, but they are doing so because the architecture they already had in development best suited their requirements and because ATI has a vast knowledge of 3D hardware implementation – if MS turned around and said “we want <insert arbitrary figure> MB of EDRAM” (which I doubt), ATI’s engineers would have turned around and explained exactly what that would buy them in terms of resolutions/performance and, if it wasn’t a good fit, suggested other quantities and the trade-offs they would bring.

You pay for expert knowledge to use that expert knowledge.

As an aside, though, I’m not entirely sure Xenon will be dealing with 32bpp or FP16 buffers.
 
i prefer 100 frames/sec. I have a 100Hz TV set.. or doesn't that matter actually? and btw

WHY 60? if you have a progressive-scan LCD or PLASMA screen, does that differ from the 30/60 (25/50 PAL) refresh on CRT tubes?
 
DemoCoder said:
10MB eDRAM is not enough for 720p double-buffered or FSAA'ed, and if using HDR, forget it.

Who needs to double-buffer in edram? That's pretty wasteful if you ask me (hint: copy-on-write is pretty damn easy and only requires 1 bit per pixel to support). And ~10MB should be enough to do HDR with FSAA, with a little hardware support for spilling non-compressible samples to DRAM, which would be the rare case.

So I think it should be possible to support 720p double buffered FSAA HDR rendering with a ~10MB main buffer and a 25+ GB/s backing store.

Aaron Spink
speaking for myself inc.
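
One possible reading of that 1-bit-per-pixel copy-on-write idea, sketched below purely as an illustration (nothing here describes actual hardware): keep a single working buffer plus a one-bit "written this frame" mask, and let reads of untouched pixels fall back to the previous frame's copy in main memory.

// Sketch: single buffer + 1-bit mask instead of two full buffers in eDRAM.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct CowBuffer {
    int w, h;
    std::vector<uint32_t> edram;      // the one working buffer ("in eDRAM")
    std::vector<uint32_t> previous;   // last completed frame, resolved to main memory
    std::vector<uint8_t>  written;    // conceptually 1 bit per pixel (a byte here for clarity)

    CowBuffer(int W, int H) : w(W), h(H), edram(W * H), previous(W * H), written(W * H, 0) {}

    void beginFrame() { std::fill(written.begin(), written.end(), uint8_t(0)); }

    void write(int x, int y, uint32_t c) { edram[y * w + x] = c; written[y * w + x] = 1; }

    // A read (blending, display) picks whichever copy is valid for this pixel.
    uint32_t read(int x, int y) const {
        return written[y * w + x] ? edram[y * w + x] : previous[y * w + x];
    }

    // End of frame: resolve only the pixels that were actually touched.
    void endFrame() {
        for (int i = 0; i < w * h; ++i)
            if (written[i]) previous[i] = edram[i];
    }
};

int main() {
    CowBuffer fb(4, 4);
    fb.beginFrame();
    fb.write(1, 1, 0xff00ff00u);
    std::printf("touched %08x, untouched %08x\n",
                (unsigned)fb.read(1, 1), (unsigned)fb.read(0, 0));
    fb.endFrame();
    return 0;
}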
 
hey69 said:
i prefer 100 frames/sec. I have a 100Hz TV set.. or doesn't that matter actually? and btw

WHY 60? if you have a progressive-scan LCD or PLASMA screen, does that differ from the 30/60 (25/50 PAL) refresh on CRT tubes?

DLP sets are also 60.
 
DemoCoder said:
10.5 > 10.0. No space left for texture cache, vertex buffers, "unified shaders"; render-to-texture ops must be copied to main memory; in fact, pretty much everything must go to main memory, except blending ops. Whoop de do.

I expected more from a next-gen console. Frankly, the Xbox2 is looking more and more disappointing as time goes on.

Use your brain, DC. Why would you need to use the edram for texture cache when you have 25+ GB/s of memory directly attached? Use eDRAM for the things it's needed for (the general main buffer) and let the main memory handle the rest.

You are reminding me of people who used to be disappointed that all of ram wasn't high speed cache.

Aaron Spink
speaking for myself inc.
 
DemoCoder said:
Now on the XBOX2 if I want to render a 720p or 1080i scene/texture in HDR with 4x samples, I have to resort to a contorted split frame technique.
Why? This is EASY to fix in hardware. Hell, all the chips in PCs already have support for writing non-compressible data to extra memory.

If you want to worry, worry about how much DRAM they'll include, as that will likely be the main limiter.

Aaron Spink
speaking for myself inc.
 
if all those 'insider leaks' are to be trusted, the overall xbox2 mem architecture looks more and more GC-like: UMA with decent bandwidth and just enough edram for a back buffer*. i was wondering how long it would take MS to realize that a back buffer in UMA land is a no-go.

btw, aaron, regardless of how cheap it is, i don't think a copy-on-write will do for a double-buffering scheme - your back buffer is not complete until the very last write to it.

* i do believe xbox2 will actually have just enough for a full 720p framebuffer; if they bothered to put in 10MB of it, it wouldn't be that expensive to add .5MB more.
 
nAo:
I agree, but the problem of PS2 image quality has almost nothing to do with its limited edram.
I don't know; it seems like a pretty tight fit. Seems like if you want a nicely textured game with rendering precision and good display, one desire puts a lot of pressure on the other desire, like what happens in PS2 Soul Calibur 2 and other titles. In the 32-bit depth buffer mode, the game displays only in interlaced, and it drops to 16-bit depth when the front buffer has to be full height for proscan.

Of course, the MIP-mapping isn't a problem of the eDRAM size.
 
darkblu said:
btw, aaron, regardless of how cheap it is, i don't think a copy-on-write will do for a double-buffering scheme - your back buffer is not complete until the very last write to it.
In fact, copy-on-write should be useful just with small on-chip local memories, not with huge edram pools.

Lazy8s said:
I don't know; it seems like a pretty tight fit. Seems like if you want a nicely textured game with rendering precision and good display, one desire puts a lot of pressure on the other desire, like what happens in PS2 Soul Calibur 2 and other titles. In the 32-bit depth buffer mode, the game displays only in interlaced, and it drops to 16-bit depth when the front buffer has to be full height for proscan.
I don't know how Soul Calibur was designed and of course I can't comment on it, but I do know it's quite common and easy to upload 2-3 MB of compressed textures (4- and 8-bit CLUT textures) per frame into a small (500-700 KB, double-buffered) memory pool in the edram. All the remaining memory can be used for a full 512x512x32-bit back buffer with a 32-bit Z-buffer and a 640x512x32-bit front buffer (it's hard for a casual gamer to spot the difference between a full back buffer and a 512x512 back buffer on a common television). Texture uploads to the GS mostly run at about 700 MB/s, so it's not a big deal to devote 25% of your EE-GS bus bandwidth to textures.
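
As a back-of-the-envelope check, those buffer sizes against the GS's 4 MB of eDRAM (the dimensions are the ones given above; the arithmetic is just a sanity check):

// Sketch: the eDRAM budget for the layout described above.
#include <cstdio>

int main() {
    const double KB = 1024.0, MB = 1024.0 * KB;
    const double back  = 512 * 512 * 4.0;   // 512x512, 32-bit colour back buffer
    const double zbuf  = 512 * 512 * 4.0;   // matching 32-bit Z-buffer
    const double front = 640 * 512 * 4.0;   // 640x512, 32-bit colour front buffer
    const double total = 4 * MB;            // GS eDRAM

    const double used = back + zbuf + front;
    std::printf("back %.2f MB + Z %.2f MB + front %.2f MB = %.2f MB\n",
                back / MB, zbuf / MB, front / MB, used / MB);
    std::printf("leaves %.0f KB for the double-buffered texture pool (~%.0f KB per half)\n",
                (total - used) / KB, (total - used) / (2.0 * KB));
    return 0;
}

That leaves roughly 768 KB, which is in the same ballpark as the 500-700 KB double-buffered upload pool described above.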
 