XBOX2 graphics chip really using embedded DRAM?

EntombeD

Newcomer
Supposed to be leaked specs.....

The graphic chip will contain not only a graphics rendering core but also embedded DRAM acting as a frame buffer that is big enough to handle an image that is 480i and can be 4 times oversampled and double buffered. Yeah, we all remember Bitboys, but this time you can bet this is for real. This solution will finally make possible HDTV visuals with full screen anti-aliasing on.

Could there be any truth to this? I've never heard of the R500 using embedded DRAM, BitBoys-style. :?:

Oh, sorry, I forgot to put in the links.
Besides the TeamXbox link, everything seemed to be based on this:
http://forums.xbox-scene.com/index.php?showtopic=231928
 
EntombeD said:
Supposed to be leaked specs.....

The graphic chip will contain not only a graphics rendering core but also embedded DRAM acting as a frame buffer that is big enough to handle an image that is 480i and can be 4 times oversampled and double buffered. Yeah, we all remember Bitboys, but this time you can bet this is for real. This solution will finally make possible HDTV visuals with full screen anti-aliasing on.

Could there be any truth to this? I've never heard of the R500 using embedded DRAM, BitBoys-style. :?:

The quote doesn't make any sense. How is a frame buffer big enough for 480i supposed to "make possible HDTV visuals with full screen anti-aliasing?"
 
I say it's unlikely; I mean, ATI hasn't really done any major work with eDRAM in the past. It would be unlikely for them to suddenly put it into a major chipset release without prior experience with the technology on the desktop market.

It could happen but I doubt it.
 
Jabjabs said:
I say it's unlikely; I mean, ATI hasn't really done any major work with eDRAM in the past. It would be unlikely for them to suddenly put it into a major chipset release without prior experience with the technology on the desktop market.

It could happen but I doubt it.

The main NGC graphics chip is built around 3 MB of embedded DRAM, so ATI is capable of doing this kind of job.
For me it's 100% sure that all next-generation graphics chips will contain eDRAM, because it's the only way to have the necessary amount of bandwidth.
 
Geeze Louise. That quote is from an old article TeamXbox did LAST Feb 1st. We talked about it here already...

http://www.beyond3d.com/forum/viewtopic.php?p=218207d#218207

http://news.teamxbox.com/xbox/5388/Xbox-2-Specs-Leaked-Update-

Nothing has been confirmed since then. I'm starting to have some doubts about embedded DRAM being included, though I will say I expect the resolution bar to be set at 720p. Regular TVs will probably just get scaled-down output. Is it possible that they can do this without the need for embedded DRAM? If so, would it be a hardware or software method?

Tommy McClain
 
let's use some good old logic here.

embedded dram = good thing

but this is a good thing, and there's nothing good about Xenon, thus it can't be true.

hence xenon GPU having embedded ram is impossible. you might find some good old 120ns EDO RAM acting as the framebuffer though.
 
AzBat said:
Regular TVs will probably just get scaled-down output. Is it possible that they can do this without the need for embedded DRAM? If so, would it be a hardware or software method?
You mean scaling? The type of RAM is irrelevant. If the leaked specs are anything to go by, Xbox 2 will have a hardware scaler. I expect Revolution will as well. I reckon both systems are designed with flat panels in mind.
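
For what it's worth, here's a minimal sketch of what the scaled-down output step amounts to, whether it lives in a scan-out scaler or in software. This is purely illustrative: it assumes a plain box filter over an RGB framebuffer and is not how either console actually does it.

# Purely illustrative box-filter downscale (e.g. 1280x720 -> 640x480).
# src is a row-major list of (r, g, b) tuples of length src_w * src_h.
def downscale(src, src_w, src_h, dst_w, dst_h):
    dst = []
    for dy in range(dst_h):
        for dx in range(dst_w):
            # Map each destination pixel back to a block of source pixels.
            x0, x1 = dx * src_w // dst_w, (dx + 1) * src_w // dst_w
            y0, y1 = dy * src_h // dst_h, (dy + 1) * src_h // dst_h
            r = g = b = n = 0
            for sy in range(y0, y1):
                for sx in range(x0, x1):
                    pr, pg, pb = src[sy * src_w + sx]
                    r, g, b, n = r + pr, g + pg, b + pb, n + 1
            # Average the block down to one output pixel.
            dst.append((r // n, g // n, b // n))
    return dst

A dedicated scaler just does this (with a better filter) on the way to the display, so it costs the renderer essentially nothing; doing it in software means an extra pass over the frame.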
 
MrSingh said:
but this is a good thing, and there's nothing good about Xenon, thus it can't be true.
That's from the wrong forum :p
At B3D, the correct saying is
"but this is only a good thing for Xenon if there is at least 64MB of eDram, but possibly more. 10MB would make it underpowered and crap, even EDO ram would be better then that."
 
gonna need the ram and bandwidth for it too...


4 samples per clock or something? I forget how it goes.
 
Fafalada said:
MrSingh said:
but this is a good thing, and there's nothing good about Xenon, thus it can't be true.
That's from the wrong forum :p
At B3D, the correct saying is
"but this is only a good thing for Xenon if there is at least 64MB of eDram, but possibly more. 10MB would make it underpowered and crap, even EDO ram would be better then that."

*FLASHBACK*

My god, I remember EDO RAM; I was the happiest guy on the block when I upgraded my first PC from 16 to a whopping 32 MB of very-hard-to-find-and-soon-thereafter-discontinued EDO RAM!!
 
Fafalada said:
"but this is only a good thing for Xenon if there is at least 64MB of eDram, but possibly more. 10MB would make it underpowered and crap, even EDO ram would be better then that."

I'll take EDO RAM anytime!

Where is the Faf? The big barrel crowd is expecting the Faf to show up again!
 
vliw said:
For me it's 100% sure that all next-generation graphics chips will contain eDRAM, because it's the only way to have the necessary amount of bandwidth.
I'm not so sure. EDRAM takes up a lot of silicon space that can be used for more pixel pipes. A 720x480i backbuffer would need just under 3MB if 4xAA is used, and I doubt they'd be aiming for just 480i rendering. I'm not even including the Z buffer, and without a second buffer you'd have to copy into main memory anyway instead of swapping. HDR will double memory requirements as well.

I know framebuffer writing doesn't need the granularity or latency of CPU cache, but just look at how big 1-2MB of cache is on a current CPU. GameCube could have been a lot more powerful without the 1T-SRAM (if the die space was used elsewhere), but you need intelligent, well-researched design to avoid bandwidth problems. I'm sure this is the reason why Matrox, S3, XGI, etc. can't compete with ATI/NVidia given the same bandwidth.

The last problem with embedded RAM is techniques like render to texture. You need to write these textures into main memory anyway, so it's unlikely to be much of a gain.
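
As a back-of-the-envelope check on the buffer sizes above (my own arithmetic, assuming 4 bytes per colour sample and 4 bytes per Z sample; the "just under 3MB" figure only works out if "720x480i" means a single 720x240 field, or 16-bit colour):

# Rough buffer-size arithmetic, nothing more.
def buffer_mb(width, height, aa_samples, bytes_per_sample):
    return width * height * aa_samples * bytes_per_sample / (1024 * 1024)

print(buffer_mb(720, 240, 4, 4))      # one 480i field, 4xAA, colour only: ~2.6 MB
print(buffer_mb(720, 240, 4, 4 + 4))  # add a 4-byte Z sample and it doubles: ~5.3 MB
print(buffer_mb(1280, 720, 4, 4))     # 720p colour buffer at 4xAA: ~14.1 MB
print(buffer_mb(1280, 720, 4, 8))     # FP16 HDR colour at 4xAA: ~28.1 MB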

With all the fancy Z and colour compression algorithms already under ATI's belt, I don't think it'll be a wise investment. Using a fraction of that space for internal FIFOs to make efficiency near 100% is much better, IMO. RAM is steadily getting faster, but a 720p TV running at 60 Hz is fixed. Bandwidth will only assist in drawing more layers, and computation seems to be the name of the game for advanced graphics. Unless you do deferred rendering, a large amount of on-chip memory for framebuffer purposes is not the right way to go.
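
To put a rough number on the point about 720p at 60 Hz being fixed (my own figures; the 4x overdraw factor and the 12 bytes of colour/Z traffic per pixel drawn are assumptions, and texture reads are ignored):

# Rough framebuffer-traffic estimate: pixels drawn per second times bytes moved per pixel.
def fb_bandwidth_gb_s(width, height, fps, overdraw, bytes_per_pixel):
    return width * height * fps * overdraw * bytes_per_pixel / 1e9

print(fb_bandwidth_gb_s(1280, 720, 60, 4, 12))      # no AA: ~2.7 GB/s
print(fb_bandwidth_gb_s(1280, 720, 60, 4 * 4, 12))  # 4xAA multiplies the traffic: ~10.6 GB/s

Which is the point: the output resolution and refresh rate are capped, so extra bandwidth only buys more layers or more samples per pixel, not a bigger picture.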

You could be right, though. Given how big a problem heat is nowadays, it may be that die size is not the limiting constraint, and my entire argument relies on it being one. Does embedded RAM use a lot less power than math transistors? Looking at AMD's 90nm processors, however, it's not clear whether power is a growing problem or a shrinking one.

EDIT: Hmm, it looks like I overestimated the die space required for EDRAM. You can get well over 5 times the density of L2 cache. I still stand by my reasoning though.
 
Mintmaster said:
vliw said:
For me it's 100% sure that all next-generation graphics chips will contain eDRAM, because it's the only way to have the necessary amount of bandwidth.
GameCube could have been a lot more powerful without the 1T-SRAM (if the die space was used elsewhere),

I disagree; the GameCube did not lack performance... it lacked main RAM and storage space: 24 MB of main RAM (sorry, but I cannot really add the 16 MB running at about 80 MB/s to a 2.4 GB/s memory pool that easily) and a 1.5 GB optical medium.

Bring main RAM up to 32 or 64 MB and you would have seen the GameCube even farther ahead.

A GameCube with 32-64 MB of main RAM would technically have been the best machine this generation if you consider development ease: the CPU was an OOOe core with 256 KB of L2 cache, and main RAM had quite low latency. The system is very forgiving, and IMHO it lets developers write much more of their games in C/C++, with less need to optimize code in ASM. Even a simple port of GCC can do quite a good job when helped by a large L2 cache that reduces how often the CPU has to go to main RAM, a CPU core that can keep working while it waits on cache misses or dependent-instruction stalls, and a low-latency main RAM solution. Compare that to, say, PlayStation 2, where the in-order CPU core had NO L2 cache and only 8 KB of data cache (there is 16 KB of SPRAM, but that has to be managed manually by the programmers; GCC cannot do much with it, if anything at all).

I like PlayStation 2 for its performance and flexibility, but of the three the GCN is definitely the most developer-friendly, IMHO.

The last problem with embedded RAM is techniques like render to texture. You need to write these textures into main memory anyway, so it's unlikely to be much of a gain.

This is how it is done on GameCube, because what they used is a texture cache realized in eDRAM together with a tight on-chip memory pool for the frame buffer and the Z-buffer.

PlayStation 2's GS is actually in the best position of the consoles of its generation for render-to-texture effects, as it does not need to go to main RAM.
 