MS big screw up: EDRAM

I think the poster meant DirectX texture compression modes when he wrote "D3D compression".
 
Rys said:
I think the poster meant DirectX texture compression modes when he wrote "D3D compression".

So X360 has texture compression that is twice as good as what is used at present? Wouldn't PC IHVs like to know about this?
 
scificube said:
So X360 has texture compression that is twice as good as what is used at present? Wouldn't PC IHV's like to know about this?
Nope, it just has a 2:1 effective compression ratio (or more, or less, depending on what's going on) if the developer is clever. PC hardware has the same ability to compress a large chunk of what's in memory; it's just that people usually talk in raw uncompressed numbers.
 
Rys said:
Nope, it just has a 2:1 effective compression ratio if the developer is clever. PC hardware has the same ability; it's just that people usually talk in raw uncompressed numbers.

This is what I've been suspecting. It seems like a play on ideas to me. Yes, compression nets you more effective bandwidth, but isn't it logical to assume compression would be used by anyone intelligent in the first place? So there's no real advantage here... just stating the obvious, unless I'm mistaken. At least no advantage in the sense that it's something the X360 can do and the PS3, or for that matter PCs, cannot. It's like saying your car will go faster if you put tires on your rims vs. not putting tires on your rims.
 
scificube said:
This is what I've been suspecting. It seems like a play on ideas to me. Yes, compression nets you more effective bandwidth, but isn't it logical to assume compression would be used by anyone intelligent in the first place? So there's no real advantage here... just stating the obvious, unless I'm mistaken. At least no advantage in the sense that it's something the X360 can do and the PS3, or for that matter PCs, cannot. It's like saying your car will go faster if you put tires on your rims vs. not putting tires on your rims.
Yeah, you're right. Compression and effective bandwidth is always a tricky subject to nail down in this way, which is why talking in raw numbers (least common denominator) is the best way to go about it in most cases.

There are going to be cases where hardware will have always-on high(ish) compression for Z or colour or vertex (and other) data in certain rendering modes, but because it might not be on full-time, or the developer's code might render certain compression schemes invalid, the whole "it's got 44GB/sec effective" way of thinking is kind of moot in a lot of ways, especially across architectures as disparate as the Xbox 360 and PS3.
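
To make the "effective bandwidth" arithmetic concrete, here's a minimal C sketch using the 22GB/s figure quoted in this thread; the compression ratio and the fraction of traffic that actually compresses are made-up assumptions, which is exactly why the headline "effective" number is slippery.

[code]
#include <stdio.h>

/* Toy model: effective bandwidth = raw bandwidth, scaled by how much of the
 * traffic is compressible and by the ratio achieved on that part.
 * All numbers are illustrative assumptions, not measured figures. */
static double effective_bandwidth(double raw_gbps,
                                  double compressible_fraction,
                                  double compression_ratio)
{
    double compressed   = raw_gbps * compressible_fraction * compression_ratio;
    double uncompressed = raw_gbps * (1.0 - compressible_fraction);
    return compressed + uncompressed;
}

int main(void)
{
    double raw = 22.0; /* GB/s, the shared-bus figure quoted in this thread */

    /* Best case the marketing line implies: everything compresses 2:1. */
    printf("all traffic 2:1      : %.1f GB/s\n", effective_bandwidth(raw, 1.0, 2.0));

    /* A more cautious guess: only half the traffic compresses 2:1. */
    printf("half the traffic 2:1 : %.1f GB/s\n", effective_bandwidth(raw, 0.5, 2.0));
    return 0;
}
[/code]

The first line prints the 44GB/s headline figure; the second shows how quickly it drops once you assume only part of the traffic compresses.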
 
From what I understand, the Xenon CPU also has the same DX compression hardware that is built into the GPU, so the CPU can decompress data in exactly the way the GPU can. This would allow for more data than just DXT textures to be compressed...
 
Qroach said:
From what I understand, the Xenon CPU also has the same DX compression hardware that is built into the GPU, so the CPU can decompress data in exactly the way the GPU can. This would allow for more data than just DXT textures to be compressed...

That would be interesting.

I wonder though...

Xenon surely needs to work on uncompressed data. After work is completed, that data could then be compressed and sent to either Xenos or main memory, and in compressed form it would consume less bandwidth. However, Xenos must decompress this data again before it can use it (really fast, for sure, but it still takes some time).

RSX wouldn't need to decompress data if Cell did not compress it in the first place, at the expense of consuming more bandwidth between them. Cell could still compress data the old-fashioned way and stick it in main memory for RSX's use later if need be.

I wonder which approach is better.
 
If the compression/decompression is handled in hardware it should be transparent in use. The cost would be using up transistors for compression/decompression logic that could otherwise be spent elsewhere, but AFAIK the compression hardware is trivial. That is, hardware-compressed formats have no penalty over uncompressed data formats.
 
I listened to "the Cube is efficient" for years. It was underpowered, is what it was.

Compared to what? The Xbox, which cost twice as much to make.

But they had terrible sales.

Not a win.

You must have an extremely simplistic view of what winning and losing is in the video game business, and a very strange view at that.
 
Shifty Geezer said:
If the compression/decompression is handled in hardware it should be transparent in use. The cost would be using up transistors for compression/decompression logic that could otherwise be spent elsewhere, but AFAIK the compression hardware is trivial. That is, hardware-compressed formats have no penalty over uncompressed data formats.

I was thinking that. Oh well.
 
scificube said:
I truly don't understand how this D3D compression is supposed to work... I thought it was CPU overhead that needed to be removed, overhead that OpenGL doesn't already have. If this is referring to new compression techniques for textures etc., then I'd like to hear about them.

If anyone can post where MS stated they could get double the overall bandwidth in the X360 due to this compression, I'd appreciate it. Info on just what is being compressed, or how it works, would be even better.

(I mean 2x the bandwidth using these new compression techniques vs. the methods of compression commonly used now.)

I believe what he was referring to was the vertex compression of the VMX-128 units. The CPU does not compress all data, but the VMX-128 units do compress and decompress vertex data in hardware.
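
As a rough plain-C illustration of the idea (not the actual VMX-128 instructions or formats, which I'm only assuming work roughly along these lines): an attribute gets stored packed, e.g. as signed 16-bit fixed point instead of 32-bit float, and converted back to float when it's consumed, halving the memory traffic for that attribute.

[code]
#include <stdio.h>
#include <stdint.h>

/* Illustration only: pack a value in [-1, 1] into a signed 16-bit integer
 * and unpack it again. Hardware packed-vertex support does this sort of
 * conversion on load/store; the exact formats the VMX-128 / GPU handle
 * aren't shown here. */
static int16_t pack_snorm16(float f)
{
    if (f >  1.0f) f =  1.0f;
    if (f < -1.0f) f = -1.0f;
    return (int16_t)(f * 32767.0f);
}

static float unpack_snorm16(int16_t p)
{
    return (float)p / 32767.0f;
}

int main(void)
{
    float   pos[3] = { 0.25f, -0.5f, 0.875f }; /* one vertex position   */
    int16_t packed[3];                         /* 6 bytes instead of 12 */

    for (int i = 0; i < 3; ++i)
        packed[i] = pack_snorm16(pos[i]);

    for (int i = 0; i < 3; ++i)
        printf("%f -> %d -> %f\n", pos[i], packed[i], unpack_snorm16(packed[i]));
    return 0;
}
[/code]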
 
> "22 GB/s memory bandwidth+D3D compression=effectively 44 GB/s memory bandwidth"

Well, D3D texture compression gives you 6:1 compression, so by that logic it might be more accurate to say 132 GB/s. Since we are making comparisons based on ACTUAL bandwidth, and all modern GPUs support D3D texture compression, it might be better to just stick to using the actual rate.
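
For reference, the 6:1 figure is the DXT1 case against 24-bit RGB: each 4x4 block of texels is stored in 8 bytes, versus 48 bytes uncompressed (or 64 bytes for 32-bit RGBA, which is where an 8:1 figure would come from). As a quick sketch of the arithmetic:

[code]
#include <stdio.h>

int main(void)
{
    const int texels_per_block = 4 * 4; /* DXT works on 4x4 texel blocks           */
    const int dxt1_block_bytes = 8;     /* two 16-bit colours + 32 bits of indices */

    int rgb24  = texels_per_block * 3;  /* 48 bytes uncompressed, 24-bit RGB  */
    int rgba32 = texels_per_block * 4;  /* 64 bytes uncompressed, 32-bit RGBA */

    printf("DXT1 vs 24-bit RGB : %d:1\n", rgb24  / dxt1_block_bytes); /* 6:1 */
    printf("DXT1 vs 32-bit RGBA: %d:1\n", rgba32 / dxt1_block_bytes); /* 8:1 */
    return 0;
}
[/code]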

Bill's overly emotional response, as if the Xbox 360 is doomed because it's using some eDRAM, is just crazy. It's like thinking the next generation is going to be won based on the level of anti-aliasing done in a game.
 
Bill

The Xbox 360 has a "Unified Memory Architecture". This means that every piece of hardware shares the same memory pool.

So the 3 CPU cores + GPU + other little devices share 22GB/s of memory bandwidth. Now, wouldn't it be better for the GPU to have some of its own special memory, so that it takes up less of that 22GB/s of bandwidth?

Example (a rough code sketch of this arithmetic follows below):

No EDRAM
CPU1 + CPU2 + CPU3 + GPU = 22GB/s
Let's say the GPU takes up 19GB/s, which leaves the CPUs 3GB/s.

With EDRAM
CPU1 + CPU2 + CPU3 + GPU = 22GB/s, with the GPU also hooked up to its own EDRAM at 32GB/s.

When the GPU uses the EDRAM it frees up more of the normal RAM bandwidth.
So now the GPU takes up only 8GB/s and leaves 14GB/s for the CPUs.
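
That split as a tiny C sketch (the per-client GB/s numbers are the illustrative guesses from the example above, not measurements): whatever the GPU doesn't pull over the shared bus is what's left for the three CPU cores.

[code]
#include <stdio.h>

/* Toy model of the UMA split above: one shared bus, and the CPU cores get
 * whatever the GPU leaves behind. The GB/s figures are illustrative guesses,
 * not measurements. */
int main(void)
{
    const double shared_bus = 22.0;  /* GB/s shared by the CPU cores and GPU */

    double gpu_no_edram   = 19.0;    /* GPU rendering straight out of main RAM   */
    double gpu_with_edram =  8.0;    /* framebuffer traffic moved onto the EDRAM */

    printf("No EDRAM  : CPUs get %.0f GB/s\n", shared_bus - gpu_no_edram);   /*  3 */
    printf("With EDRAM: CPUs get %.0f GB/s\n", shared_bus - gpu_with_edram); /* 14 */
    return 0;
}
[/code]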

PC GPUs don't need EDRAM because they have their own RAM. Imagine buying a PC GPU that came with no RAM and had to use the same memory as your CPU (slow).

So in the end you can think of it like this

PC: ATI x1800 256MB
XBOX360: ATI Xenos 10MB (the added bonus of the Xenos eDRAM is that it can do other tricks too)
 
Shifty Geezer said:
If the comprtession/decompression is handled in hardware it should be transparent in use. The cost associated would be using up transistors for compression/decompression that could otherwise be used on other logic, but AFAIK the compression hardware is trivial. That is, hardware compressed formats have no penalty over uncompressed data formats.

Assuming you don't mind whatever latency penalty is associated with the compression/decompression...

As for the main topic of this thread: the eDRAM solution strikes me as a means to an end. I assume Microsoft probably said to ATI something along the lines of "Antialiasing is a key feature for us this generation and we want all of our developers using it. Make it really cheap to use so that they don't complain to us that they have to turn it off to get good framerates." This isn't to say that AA is the only thing the eDRAM is good for, but I imagine it was one of the driving forces behind its development.

I think the important question here is whether or not Antialiasing is the right goal this generation. Personally I *love* AA, and I would be incredibly happy to see something like this on the PC. I would also be rather unhappy if most games on the PS3 ended up lacking any kind of AA. On the other hand, I know a number of (smart!) people who have high end video cards and don't bother turning it on because they just don't care.

Nite_Hawk
 
Black Dragon37 said:
According to Kutaragi-san, for a GPU that's gonna use 2 HDTVs, it's not needed.

Not that it's not needed; more so that it would need to be immense.

Xenos' daughter die circumvents this with its tiling scheme. I'm sure Kutaragi would have loved to implement some eDRAM if it had been at all practical.
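
Some back-of-the-envelope numbers on why (assuming 32-bit colour plus 32-bit depth/stencil per sample and 4x multisampling; real formats and compression schemes vary): even a single full 720p target with 4xAA is around 28MB, which is why Xenos tiles into its 10MB, and two full 1080p targets would be well over 100MB of eDRAM.

[code]
#include <stdio.h>

/* Back-of-the-envelope render target sizes, assuming 32-bit colour plus
 * 32-bit depth/stencil per sample and 4x multisampling. Real formats and
 * compression vary; this only shows the order of magnitude. */
static double target_mb(int width, int height, int samples)
{
    const int bytes_per_sample = 4 + 4; /* colour + depth/stencil */
    double bytes = (double)width * height * samples * bytes_per_sample;
    return bytes / (1024.0 * 1024.0);
}

int main(void)
{
    printf("720p,  4xAA     : %.1f MB\n", target_mb(1280,  720, 4));
    printf("1080p, 4xAA     : %.1f MB\n", target_mb(1920, 1080, 4));
    printf("two 1080p, 4xAA : %.1f MB\n", 2.0 * target_mb(1920, 1080, 4));
    return 0;
}
[/code]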

As it stands now though I think they're set to do well with what they have. We'll just have to see if any graphics subsystem details are revealed in the coming days/week.
 
Here's a question: do we think MS went to ATi and said 'we want such and such', or did they go to ATi and ask 'what have you got?', with ATi replying 'we've been experimenting with eDRAM and US', or did the two sit down together and discuss ideas until they reached something they were happy with?
 
I think eDRAM is the way to go. Look at the PS2 (lack of a dedicated GPU) and the GC, and how both can compete with Xbox graphics (I said compete, not beat), with the Xbox having a much more robust GPU.
 