Revolution out in mid-2006, uses MoSys RAM

Fox5 said:
GwymWeepa said:
PC-Engine said:
The benefit is: "it's like having a very fast L3 cache for your main RAM" aka low latency. :p

Now... why isn't it used everywhere? Why is the GameCube the only really big project I've heard of that uses it... couldn't PC cards use this?

1. It is low latency, but also rather low bandwidth. I wouldn't be surprised if it had a tenth of the latency of Xbox's DDR RAM, but it was also only 1/3rd the speed.

Video cards don't need it since high bandwidth tends to help more, CPUs would need new memory controllers and drivers, and really it would have needed some kind of market push.

It's also more expensive to produce than DDR RAM, and would have a smaller market.
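To see why the low-latency side of that trade can still win for small, scattered accesses, here's a back-of-envelope Python sketch. The bandwidth and latency numbers in it are made up purely for illustration, not actual GameCube or Xbox figures.

```python
# Back-of-envelope sketch with made-up numbers (not the real GameCube/Xbox figures):
# when a workload does lots of small, scattered accesses, latency dominates, so a
# low-latency but lower-bandwidth memory can deliver more *effective* throughput.

def effective_bandwidth(peak_gb_s: float, latency_ns: float, burst_bytes: int) -> float:
    """Achieved GB/s when every access pays full latency plus the burst transfer time.
    1 GB/s == 1 byte/ns, so the transfer time in ns is burst_bytes / peak_gb_s."""
    transfer_ns = burst_bytes / peak_gb_s
    return burst_bytes / (latency_ns + transfer_ns)

# Hypothetical low-latency part vs hypothetical high-bandwidth, high-latency part,
# both fetching 64-byte chunks at random:
for name, peak, latency in [("1T-SRAM-like", 2.6, 10.0), ("DDR-like", 6.4, 100.0)]:
    print(f"{name}: {effective_bandwidth(peak, latency, 64):.2f} GB/s effective")
```

On those made-up figures the lower-bandwidth, low-latency part comes out roughly three times faster for random 64-byte accesses, even though it loses badly on pure streaming.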

Yes, the main 1T-SRAM memory in GameCube has approx 1/3 the bandwidth of Xbox's DDR RAM, but that is offset greatly by the far higher bandwidth of the embedded 1T-SRAM on Flipper, for which Xbox has no equivalent.
 
It's flawed to compare memory bandwidth of GCN to Xbox. It only makes sense if you compare per pin bandwidth if you're comparing specific memory technologies.
 
PC-Engine said:
It's flawed to compare memory bandwidth of GCN to Xbox. It only makes sense if you compare per pin bandwidth if you're comparing specific memory technologies.

I'd assume DDR would still have an advantage.

BTW, bandwidth per pin? But you can up bandwidth by increasing clock speed, which doesn't require increasing pins. I'd think bandwidth per cost would matter more.
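To put numbers on "per-pin bandwidth": total bandwidth is roughly data pins × clock × transfers per clock, so dividing by the pin count isolates how hard the memory technology itself is being driven, independent of how wide (and expensive) a bus you attach it to. A minimal Python sketch, with illustrative bus widths and clocks rather than the actual GCN or Xbox figures:

```python
# Minimal sketch of "per-pin bandwidth". Bus widths and clocks below are
# illustrative placeholders, not the real GCN or Xbox memory specs.

def bandwidth_gb_s(data_pins: int, clock_mhz: float, transfers_per_clock: int) -> float:
    bits_per_second = data_pins * clock_mhz * 1e6 * transfers_per_clock
    return bits_per_second / 8 / 1e9  # bits -> bytes -> GB/s

def per_pin_mb_s(data_pins: int, clock_mhz: float, transfers_per_clock: int) -> float:
    return bandwidth_gb_s(data_pins, clock_mhz, transfers_per_clock) * 1000 / data_pins

# A narrow single-data-rate bus vs a wide DDR bus: total bandwidth differs by 4x,
# per-pin bandwidth only by 2x (the DDR part moves two words per clock).
print(bandwidth_gb_s(64, 200, 1), per_pin_mb_s(64, 200, 1))    # hypothetical 64-bit SDR bus
print(bandwidth_gb_s(128, 200, 2), per_pin_mb_s(128, 200, 2))  # hypothetical 128-bit DDR bus
```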
 
Fox5 said:
PC-Engine said:
It's flawed to compare memory bandwidth of GCN to Xbox. It only makes sense if you compare per pin bandwidth if you're comparing specific memory technologies.

I'd assume DDR would still have an advantage.

BTW, bandwidth per pin? But you can up bandwidth by increasing clock speed, which doesn't require increasing pins. I'd think bandwidth per cost would matter more.

You don't know what the costs are...
 
PC-Engine said:
Fox5 said:
PC-Engine said:
It's flawed to compare memory bandwidth of GCN to Xbox. It only makes sense if you compare per pin bandwidth if you're comparing specific memory technologies.

I'd assume DDR would still have an advantage.

BTW, bandwidth per pin? But you can up bandwidth by increasing clock speed, which doesn't require increasing pins. I'd think bandwidth per cost would matter more.

You don't know what the costs are...

According to an article I think someone posted, it was something greater than the cost of DDR RAM, up to the cost of RDRAM.
 
16 MB embedded 1T-SRAM for Hollywood's frame-buffer?


MoSys didn't reveal how much RAM would be going into each Revolution console - but in an unrelated story also doing the rounds about Revolution today, Chinese website Unika.com claims to have seen an actual specification for the hardware.

According to the site, the console will boast four 2.5Ghz IBM G5 Custom cores, with 128KB of level 1 cache and a 512KB shared level 2 cache, while the graphics will be powered by a dual core ATI RN520 chipset, with 16MB of on-board eDRAM for the frame buffer.

that'd be pretty nice. it's 3 to 6 MB more than what Xbox360 is reportedly getting.

though neither Xenon GPU nor Hollywood would, therefore, have as much embedded memory as GS I-32.
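For a sense of scale on those eDRAM sizes, here's a quick sketch of how big a colour + Z framebuffer actually is at various resolutions. The 32-bit colour / 32-bit Z, no-AA format is my own assumption, purely to put the rumoured 10-13 MB and 16 MB figures in context.

```python
# Rough framebuffer footprint: width * height * (colour bytes + depth/stencil bytes).
# The format choices (32-bit colour, 32-bit Z/stencil, no multisampling) are my
# assumptions, purely to get a sense of scale.

def framebuffer_mb(width: int, height: int, color_bytes: int = 4, depth_bytes: int = 4) -> float:
    return width * height * (color_bytes + depth_bytes) / (1024 * 1024)

for w, h in [(640, 480), (1280, 720), (1920, 1080)]:
    print(f"{w}x{h}: {framebuffer_mb(w, h):.1f} MB")
```

Under those assumptions a 720p colour + Z buffer is about 7 MB and a 1080p one is about 16 MB, which is why the exact eDRAM amount matters so much once AA or higher resolutions enter the picture.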
 
Megadrive1988 said:
16 MB embedded 1T-SRAM for Hollywood's frame-buffer?


MoSys didn't reveal how much RAM would be going into each Revolution console - but in an unrelated story also doing the rounds about Revolution today, Chinese website Unika.com claims to have seen an actual specification for the hardware.

According to the site, the console will boast four 2.5Ghz IBM G5 Custom cores, with 128KB of level 1 cache and a 512KB shared level 2 cache, while the graphics will be powered by a dual core ATI RN520 chipset, with 16MB of on-board eDRAM for the frame buffer.

that'd be pretty nice. it's 3 to 6 MB more than what Xbox360 is reportedly getting.

though neither Xenon GPU nor Hollywood would, therefore, have as much embedded memory as GS I-32.

LOL is that the same info that originated from the guy at GAF? They could add 32MB of eDRAM but why would they need to?
 
Does it make any sense to put two R520s in instead of a custom GPU :?:
I don't think it would be a beast for a PC, but a lot of things (like ultra high rez) would not be used, so those transistors could do a better job on other features...

Anyone agree :?:
 
PC-Engine said:
Actually consoles would need about the same fillrate as PCs to render at 1920x1080 HD resolutions.

Or just do what 360 is said to do, just upscale it. I know people here don't like that, but once 360 does it, I think it'll be alright :D
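Rough arithmetic behind that fillrate point: pixels per frame × frame rate × average overdraw. A short sketch, where the 60 fps and overdraw-of-3 figures are assumptions of mine just for illustration:

```python
# Quick fillrate estimate: pixels per frame * frames per second * average overdraw.
# The 60 fps and overdraw-of-3 figures are assumptions, just for illustration.

def required_fillrate_gpix_s(width: int, height: int, fps: int = 60, overdraw: float = 3.0) -> float:
    return width * height * fps * overdraw / 1e9

for w, h in [(640, 480), (1280, 720), (1920, 1080)]:
    print(f"{w}x{h}: ~{required_fillrate_gpix_s(w, h):.2f} Gpixels/s")
```

Going from 640x480 to 1920x1080 is almost a 7x jump in pixels per frame, which is the gap an upscaler sidesteps.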
 
V3 said:
PC-Engine said:
Actually consoles would need about the same fillrate as PCs to render at 1920x1080 HD resolutions.

Or just do what 360 is said to do, just upscale it. I know people here don't like that, but once 360 does it, I think it'll be alright :D

What exactly does upscaling look like? Is it just like stretching a low res image on a monitor to fill the full screen?
 
Dual core R520?

From what I understand, the R500 (Xbox) is totally different tech than the R520 which is currently being delayed from PC release because there's no market, right? The R520 is just an extension of essentially the same tech as the venerable 9700, right?

I've gotten the impression that the R500 (Xbox) is superior (perhaps vastly) to the R520 that ATI has "ready" to produce for the PC.

So that begs the question, would a dual core R520 even be better than the R500 in the Xbox?

Also.. Perhaps somebody can clue me in, but I thought I read a poll or story somewhere about the dual core Intel/AMD processors and it went on to ask people to guess how long until ATI/nVidia start making dual core GPUs... as in, it hasn't happened yet.

The majority opinion, IIRC, was that ATI/nVidia had no reason to move to dual core GPUs because unlike CPUs, they hadn't hit a wall in processor speed, so they didn't need to go parallel in order to achieve significant improvements.

So which is it?
 
RancidLunchmeat said:
Dual core R520?

From what I understand, the R500 (Xbox) is totally different tech than the R520 which is currently being delayed from PC release because there's no market, right? The R520 is just an extension of essentially the same tech as the venerable 9700, right?

I've gotten the impression that the R500 (Xbox) is superior (perhaps vastly) to the R520 that ATI has "ready" to produce for the PC.

So that begs the question, would a dual core R520 even be better than the R500 in the Xbox?

Also.. Perhaps somebody can clue me in, but I thought I read a poll or story somewhere about the dual core Intel/AMD processors and it went on to ask people to guess how long until ATI/nVidia start making dual core GPUs... as in, it hasn't happened yet.

The majority opinion, IIRC, was that ATI/nVidia had no reason to move to dual core GPUs because unlike CPUs, they hadn't hit a wall in processor speed, so they didn't need to go parallel in order to achieve significant improvements.

So which is it?

I thought GPUs were already parallel?
 
Fox5 said:
V3 said:
PC-Engine said:
Actually consoles would need about the same fillrate as PCs to render at 1920x1080 HD resolutions.

Or just do what 360 is said to do, just upscale it. I know people here don't like that, but once 360 does it, I think it'll be alright :D

What exactly does upscaling look like? Is it just like stretching a low res image on a monitor to fill the full screen?

Depends on the upscaler, but the idea is something like that.
 
Fox5 said:
V3 said:
PC-Engine said:
Actually consoles would need about the same fillrate as PCs to render at 1920x1080 HD resolutions.

Or just do what 360 is said to do, just upscale it. I know people here don't like that, but once 360 does it, I think it'll be alright :D

What exactly does upscaling look like? Is it just like stretching a low res image on a monitor to fill the full screen?


Pretty much; depending on the scaler it can look pretty good or pretty bad. Different software DVD players on the PC can look vastly different due to the scaling they perform.

IMO the 360 will render everything at 720p internally and either upscale or downscale depending on the settings in the dashboard.
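To make "stretching a low res image" concrete, here's the crudest possible scaler (nearest-neighbour) as a tiny Python sketch; real scalers filter between source pixels instead of point-sampling, which is where the quality differences people mention come from.

```python
# Minimal sketch of what a nearest-neighbour upscaler does: each output pixel just
# samples the closest input pixel, which is literally "stretching" the image.
# Real scalers (bilinear, bicubic, a console's video scaler) filter between source
# pixels instead, which is why quality varies so much between them.

def upscale_nearest(image, out_w, out_h):
    in_h, in_w = len(image), len(image[0])
    return [
        [image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A tiny 2x2 "image" stretched to 4x4: every source pixel becomes a 2x2 block.
src = [[1, 2],
       [3, 4]]
for row in upscale_nearest(src, 4, 4):
    print(row)
```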
 
RancidLunchmeat said:
Fox5 said:
I thought GPUs were already parallel?

Shows how much I know.

But if that's the case, doesn't it only make my question all the more relevant? Why the move to dual core then?

The R520 will most likely have a higher fill rate than what Xbox is getting, and be faster too.

The R520 will most likely be a pretty big chip. The current chip, according to B3D, is already 240 mm², which is bigger than Cell. So my understanding is it's actually going to be two separate chips, not dual core on one die like a dual core CPU. I doubt Nintendo would do that though. But who knows.
 
What's the advantage to going dual core over increasing the complexity of a chip? I'd imagine there's quite a bit of redundancy with dual core, while a more complex single core chip can just focus on the areas that need help.

Like instead of a dual core Athlon 64, why not just load up an Athlon 64 with tons of cache or make it able to process more instructions per clock?
Instead of a dual core GPU, why not double the number of pixel pipelines?
 
R520 is said to have 300 to 350 million transistors, which is more transistors than the Cell prototype (234 million) or the newer revised Cell (250 million).

They're all supposedly on 90 nm (Cell ver1, Cell ver2, R520), so unless I am missing something here, naturally R520 would be a bigger chip than either Cell.
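A rough scaling argument for that: at the same process node, die area grows roughly with transistor count, assuming similar transistor density. The sketch below derives a density from the 240 mm² / ~300M-transistor R520 figures quoted earlier in the thread and applies it to Cell purely as an assumption (real densities differ, e.g. with how much of the die is SRAM).

```python
# Rough scaling argument: at the same process node and a similar transistor density,
# die area grows roughly linearly with transistor count. The density below comes from
# the 240 mm^2 / ~300M-transistor R520 figures quoted in this thread; treating Cell
# as having the same density is purely an assumption for illustration.

def estimated_area_mm2(transistors_millions: float, density_mtrans_per_mm2: float) -> float:
    return transistors_millions / density_mtrans_per_mm2

r520_density = 300 / 240  # ~1.25 Mtransistors per mm^2, from the numbers above
for name, mtrans in [("Cell (prototype)", 234), ("Cell (revised)", 250),
                     ("R520 (low est.)", 300), ("R520 (high est.)", 350)]:
    print(f"{name}: ~{estimated_area_mm2(mtrans, r520_density):.0f} mm^2")
```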
 
What's the advantage to going dual core over increasing the complexity of a chip?
One advantage might be to render independently to two different displays. Perhaps Revolution's "revolutionary" part is that it's multimonitor. But I don't believe these specs for a second. "G5" is an Apple term, not an IBM term. And the possibility of Revolution having a stock R520 GPU and maintaining 100% backwards compatibility is, I think, slim.
 