Is MoSys the reason for an underpowered Revolution?

Urian

Does anyone on these forums have any info about the MoSys 1T-SRAM-Q at 90nm and its performance compared to other memory types?
 
Urian said:
Does anyone on these forums have any info about the MoSys 1T-SRAM-Q at 90nm and its performance compared to other memory types?

On NEC's site they say speeds of up to 600MHz.
 
I don't understand what would bring you to a conclusion such as that, Urian. You might as well have asked if the slot-in drive was the reason for the Rev being perceived by you as 'underpowered'.

Have you checked out 1T-SRAM and how it works? It's actually very fast, and if used in the Rev it will be a big help in making sure the console can live up to its potential performance. The Xbox stumbled a bit in this regard due to its DRAM-based UMA design, while it was quite hard to bog down the GC's memory subsystem, from what I've read.
 
You use the word as if it were some all-encompassing, catch-all term. Bandwidth, as defined how?

The GC memory interface is 64-bit DDR for, I believe, roughly 2.6GB/s peak bandwidth. Compare that with the 128-bit interface in the Xbox, which gives, I believe, 6.2GB/s. Is the GC "crippled" because of this difference? It doesn't look that way to me, seeing games running on the system.

You think the box comes even close to its theoretical max in a real game scenario, where you may have anything from a dozen to several dozen memory transactions going on at any one time?

Just off the top of my head, there are CPU reads, CPU writes (both of which can be highly random), framebuffer pixel reads and writes, texture reads, Z-buffer reads and writes (all of which can also be highly random), hard drive and optical I/O, ethernet I/O, joypad I/O, multiple sound memory read streams (up to 256 according to the specs, if I'm not mistaken) and probably more that I didn't think of.

1T-SRAM has extremely low latency, making it very efficient and likely to hit near its theoretical max performance. DRAM, on the other hand, has quite high latency, especially on a first-page access, making what might look like very high-bandwidth memory suddenly drop to a much less impressive figure depending on the memory access pattern. 1T-SRAM's bandwidth is dependent on bus width and clock speed just as with DRAM; there's no difference there at all between the two technologies.
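To put a rough number on that latency effect, here's a toy back-of-the-envelope model; the burst length and latency cycle counts are made-up assumptions purely for illustration, not real timings for either memory type:

```python
# Toy model: effective bandwidth = peak scaled by the fraction of cycles
# actually moving data. Latency/burst figures are illustrative assumptions.

def effective_bandwidth(bus_bytes, clock_hz, burst_words, latency_cycles):
    """Bytes/s delivered when every burst pays a fixed latency up front."""
    peak = bus_bytes * clock_hz  # bytes/s at one word per cycle
    return peak * burst_words / (latency_cycles + burst_words)

# Same 64-bit (8-byte) bus at 324MHz, short 4-word random bursts:
sram = effective_bandwidth(8, 324e6, burst_words=4, latency_cycles=2)
dram = effective_bandwidth(8, 324e6, burst_words=4, latency_cycles=20)
print(f"low-latency (1T-SRAM-like): {sram / 1e9:.2f} GB/s")  # ~1.73 GB/s
print(f"high-latency (DRAM-like):   {dram / 1e9:.2f} GB/s")  # ~0.43 GB/s
```

Same bus, same clock, same peak; the only difference is how many cycles each random access wastes waiting.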

So either you still have no point, or you have a flawed one. Which is it? ;)
 
Thanks for the correction.

I believed that the bandwidth of the MoSys 1T-SRAM could be a data bottleneck for the system, but now I see that this isn't the case.

Thank you.
 
The GC memory interface is 64-bit DDR for, I believe, roughly 2.6GB/s peak bandwidth.

Just FYI, it's 64-bit SDR; the memory is clocked at twice the GPU speed (324MHz) and, as you say, has roughly 2.6GB/s of peak bandwidth.
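For anyone following along at home, the peak figure we're both quoting falls out of the usual formula, bus width in bytes x memory clock x transfers per clock. A quick sketch:

```python
# Peak bandwidth = bus width (bytes) x memory clock (Hz) x transfers per clock.
# GC main memory: 64-bit (8-byte) bus, 324MHz, SDR (1 transfer per clock).
gc_peak = 8 * 324e6 * 1  # bytes/s
print(f"GC 1T-SRAM peak: {gc_peak / 1e9:.2f} GB/s")  # ~2.59 GB/s
```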
 
Guilty Bystander said:
You do realise the DDR in the Xbox is 200MHz DDR effective (PC1600).
That gives you 200MHz x 128 bits : 8 = 3200MB/s.

Then why does MS say 6.4GB/s? Whatever it is, a friend of mine did some benchmarks with a Gentoo-modded Xbox, and she kept getting effective bandwidth somewhere between 3.2 and 3.6GB/s. I wonder how the Cube's 1T-SRAM compares; it's supposed to be a lot more efficient. I would imagine it stays closer to its theoretical peak.
 
You do realise the DDR in the Xbox is 200MHz DDR effective (PC1600).
That gives you 200MHz x 128 bits : 8 = 3200MB/s.

Your mistake there is that 128 bits equals 16 bytes, not 8 bytes. So it's 200MHz x 16 bytes x 2 (DDR) / 1024 / 1024 = 6103MB/s (5.96GB/s), or 6.4GB/s in PR land :)
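To show where the 6103MB/s and 6.4GB/s figures both come from (it's the same number in binary vs. decimal units):

```python
# Xbox main memory: 128-bit (16-byte) bus, 200MHz, DDR (2 transfers per clock).
xbox_peak = 16 * 200e6 * 2  # = 6.4e9 bytes/s
print(f"{xbox_peak / 1e9:.1f} GB/s (decimal units, the marketing figure)")
print(f"{int(xbox_peak / 1024**2)} MB/s = {xbox_peak / 1024**3:.2f} GB/s (binary units)")
```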
 
Your mistake there is that 128 bits equals 16 bytes, not 8 bytes. So it's 200MHz x 16 bytes x 2 (DDR) / 1024 / 1024 = 6103MB/s (5.96GB/s), or 6.4GB/s in PR land

128 bits : 8 = 16 bytes for me; I dunno how your math works.
Also, the DDR in the Xbox is 100MHz, or 200MHz DDR effective, which is also known as PC1600.

People commonly make the mistake of thinking the Xbox is using PC3200/DDR400 when in fact it's using PC1600/DDR200.

PC1600 : 8 x 16 bytes = 3200MB/s, or 200MHz x 128 bits : 8 = 3200MB/s.
 
128 bits : 8 = 16 bytes for me; I dunno how your math works.

I just said it was 16 bytes, so my math works the same as yours :) I just misunderstood what you wrote. I thought you were saying that 128 bits is 8 bytes rather than 128 bits / 8.

People commonly make the mistake of thinking the Xbox is using PC3200/DDR400 when in fact it's using PC1600/DDR200.

PC1600 : 8 x 16 bytes = 3200MB/s, or 200MHz x 128 bits : 8 = 3200MB/s.

According to MS and everyone else it's using 200MHz DDR (400MHz effective). Why else would they claim 6.4GB/s?
 
The memory speed is 200MHz DDR, or 400MHz effective, so stop doing the math as if it were 200MHz effective.

400MHz (200MHz DDR) x 16 bytes / 1000 = 6.4GB/s
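So the whole disagreement in this thread boils down to one factor of two, transfers per clock:

```python
clock_hz, bus_bytes = 200e6, 16  # 200MHz clock, 128-bit (16-byte) bus
print(f"SDR reading: {clock_hz * bus_bytes * 1 / 1e6:.0f} MB/s")  # 3200, the PC1600-style math
print(f"DDR reading: {clock_hz * bus_bytes * 2 / 1e6:.0f} MB/s")  # 6400, MS's 6.4GB/s figure
```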
 
I just said it was 16 bytes, so my math works the same as yours. I just misunderstood what you wrote. I thought you were saying that 128 bits is 8 bytes rather than 128 bits / 8.

Hehe, no man, I meant 8 bits = 1 byte, so I divided by 8. But that doesn't matter, I misunderstood you as well. ;)

According to MS and everyone else it's using 200MHz DDR (400MHz effective). Why else would they claim 6.4GB/s?

According to their own specs they're using PC1600; besides, PC3200 wasn't even finished by then.
The highest DDR available at the time was PC2700, or DDR333.

M$ would claim these phony bandwidths because all the specs M$ stated for the Xbox1 were phony.
M$ said the Xbox could do 125 million polygons/s, 2.93GFlop/s on the CPU, 80GFlop/s on the GPU, and fillrates of 4GTexel/s and 4GPixel/s, while in fact the Xbox can only do 10-15 million polygons/s PEAK (reached only in DOA Ultimate), 800-900MFlop/s on the CPU, 9.32GFlop/s on the GPU, and fillrates of 1.86GTexel/s and 932MPixel/s.
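For what it's worth, those last two fillrate numbers drop straight out of clock x pipeline arithmetic, assuming the commonly cited NV2A layout of 4 pixel pipelines with 2 texture units each at a 233MHz core clock (the layout is an assumption on my part, not something from MS's spec sheet):

```python
# Assumed NV2A layout: 233MHz core, 4 pixel pipelines, 2 TMUs per pipeline.
core_hz, pipes, tmus_per_pipe = 233e6, 4, 2
print(f"{core_hz * pipes / 1e6:.0f} MPixel/s")                  # 932 MPixel/s
print(f"{core_hz * pipes * tmus_per_pipe / 1e9:.2f} GTexel/s")  # 1.86 GTexel/s
```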

Sony and Nintendo, however, didn't lie about their console specs, although Sony always gets accused of doing so; in fact the PS2 specs from the unveiling are the same as those of the final hardware.

Don't even get me started on the Xbox unveiling at GDC 2001, where M$ said the Xbox could do the following (this is a direct quote from Edge UK):
1 billion polygons/s flat-shaded
300 million polygons/s fully lit and fully textured
16 GTexel/s and 4 GPixel/s
100GB/s memory bandwidth
 
Guilty Bystander said:
Hehe, no man, I meant 8 bits = 1 byte, so I divided by 8. But that doesn't matter, I misunderstood you as well. ;)



According to their own specs they're using PC1600; besides, PC3200 wasn't even finished by then.
The highest DDR available at the time was PC2700, or DDR333.

M$ would claim these phony bandwidths because all the specs M$ stated for the Xbox1 were phony.
M$ said the Xbox could do 125 million polygons/s, 2.93GFlop/s on the CPU, 80GFlop/s on the GPU, and fillrates of 4GTexel/s and 4GPixel/s, while in fact the Xbox can only do 10-15 million polygons/s PEAK (reached only in DOA Ultimate), 800-900MFlop/s on the CPU, 9.32GFlop/s on the GPU, and fillrates of 1.86GTexel/s and 932MPixel/s.

Sony and Nintendo, however, didn't lie about their console specs, although Sony always gets accused of doing so; in fact the PS2 specs from the unveiling are the same as those of the final hardware.

Don't even get me started on the Xbox unveiling at GDC 2001, where M$ said the Xbox could do the following (this is a direct quote from Edge UK):
1 billion polygons/s flat-shaded
300 million polygons/s fully lit and fully textured
16 GTexel/s and 4 GPixel/s
100GB/s memory bandwidth

The 4 GPixel/s and GTexel/s figures are antialiased fillrate, so it's true they had it (a little less, actually; after the 4GTexel/s figure was given they downclocked the NV2A). The Xbox can do more than 15 million polygons. There is an RPG that does nearly 25 million/s (Arx Fatalis, IIRC). The GPU flops figure counts the non-programmable areas, as NVIDIA keeps doing with the RSX. And the 800-900MFlop/s figure for the CPU is purely invented; it does less than the maximum peak rate, but not that much less.

Please, stop trolling.
 
Guilty Bystander said:
You do realise the DDR in the Xbox is 200MHz DDR effective (PC1600).
That gives you 200MHz x 128 bits : 8 = 3200MB/s.

It's only 200MHz? I always thought the Xbox used GDDR running at 400MHz effective, the same as used by the GeForce 3.

she kept getting effective bandwidth somewhere between 3.2 and 3.6GB/s

What was testing the bandwidth? Because the FSB of the XCPU can't come anywhere near 3GB/s.

According to their own specs they're using PC1600; besides, PC3200 wasn't even finished by then.
The highest DDR available at the time was PC2700, or DDR333.

GDDR was already up to 233MHz, I believe, twice what DDR was doing.

The Xbox can do more than 15 million polygons. There is an RPG that does nearly 25 million/s (Arx Fatalis, IIRC).

At its theoretical max, can't it do around 90 million polygons/s? (I think that's its net transform rate, anyhow)
 
Fox5 said:
At its theoretical max, can't it do around 90 million polygons/s? (I think that's its net transform rate, anyhow)

I don't know, I was talking about the in-game transform rate.
 
Fox5 said:
It's only 200MHz? I always thought the Xbox used GDDR running at 400MHz effective, the same as used by the GeForce 3.

You're right, it's 200MHz, "400MHz effective".

Was it GDDR or DDR, though? I had thought DDR. Anyway, I believe some GF3s had 250MHz (500MHz effective) memory.
 
function said:
You're right, it's 200MHz, "400MHz effective".

Was it GDDR or DDR, though? I had thought DDR. Anyway, I believe some GF3s had 250MHz (500MHz effective) memory.

What's the difference between DDR and GDDR, anyhow? If the Xbox was using DDR PC3200, it was probably at much more relaxed timings than what PCs eventually used; I don't think PC3200 was even a JEDEC standard when the Xbox came out. Besides, the manufacturing capability to make true PC3200 with good yields probably wasn't there yet. The CPU and graphics chip in the Xbox were downclocked to speeds that could get good yields (just about every NV20 chip could hit 230MHz, but that seemed to be the reasonable max for that chip), so I don't see why they would use low-yield memory.
 