Benefits of having 2x the RAM? (Xbox2)

I think they can make it work: the problem might be making it slow enough ;).

Imagine a dual-channel solution, with 2 DRAM modules per channel, providing 64 bits of data bus (128 pins).

If they change the clock (the driver could do it) from 400 MHz (which produces a 3.2 GHz signalling rate) to 100 MHz, we would have a signalling rate of 100 * 4 = 400... 400 * 2 = 800 MHz.

Now, disable one of the two channels and you end up with a 32-bit bus with an effective data signalling rate of 800 MHz.

800 MHz * 4 bytes/clock = 3.2 GB/s.

To tell you the truth, XDR would still be a little faster than Direct RAMBUS in the worst-case scenarios, as it does not multiplex the data and address busses but has two separate paths. Also, each DRAM module would have two 8-bit bi-directional busses, thus enabling parallel LOADs and WRITEs.
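
Just to put the arithmetic above in one place, a quick sketch (the clock, the x4/x2 split and the bus width are the numbers from this post, nothing official):

Code:
#include <stdio.h>

/* Rough bandwidth math for the hypothetical config above.
   The signalling rate is written as x4 then x2 (same as x8 overall),
   and one 32-bit channel moves 4 bytes per transfer. */
int main(void)
{
    double clock_mhz     = 100.0;             /* downclocked from 400 MHz          */
    double signal_rate   = clock_mhz * 4 * 2; /* 100 * 4 = 400, 400 * 2 = 800 MHz  */
    double bus_bytes     = 32.0 / 8.0;        /* one 32-bit channel = 4 bytes      */
    double bandwidth_gbs = signal_rate * 1e6 * bus_bytes / 1e9;

    printf("signalling rate: %.0f MHz\n", signal_rate);    /* 800 MHz  */
    printf("bandwidth:       %.1f GB/s\n", bandwidth_gbs); /* 3.2 GB/s */
    return 0;
}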
 
Fafalada said:
That Xenon diagram was pretty explicitly an UMA as far as I could see. All the system parts address and feed off the same memory pool.

But there was also 10MB eDRAM on the GPU, so there would be 2 pools of memory.
 
jvd said:
Even if Sony fabs the RAM (never heard that before)

It's been discussed here earlier, and Sony/Toshiba has a license to manufacture XDR, just like they got with RDRAM for PS2.

They still have to pay to make the fab

Yes, but fabs are LONG-TERM investments; they're not meant to be paid back in the first few months of operation. They might not even be paid off fully until they've been upgraded one or more times... Look at AMD's Dresden facility: it's been up and running since '99 or so, and I believe they've only recently paid back the loans on it.

Not only that, but unlike DDR400 RAM, this RAM will only go into one thing.

So? That's not a problem with ASICs, so why should it be a problem with RAM chips? Don't be silly. :)

The RAM will be expensive.

Not necessarily, as the market laws of supply and demand don't apply when you manufacture your components yourself.

Then you get into the part where, if the system isn't coming out till 2006-7, you're going to have a lot of fabs sitting around doing nothing, which costs money.

It's not doing nothing, it's fabbing at 90 nm while they tune up their 65 nm gear, which is when the fab will ACTUALLY start running.

You guys seem to think that fabbing chips doesn't cost money.

But YOU know better huh, is that it? ;)
 
Guden Oden said:
Fafalada said:
That Xenon diagram was pretty explicitly an UMA as far as I could see. All the system parts address and feed off the same memory pool.

But there was also 10MB eDRAM on the GPU, so there would be 2 pools of memory.

In the strictest sense, make that 3 pools of memory. Cache that isn't backing main RAM has to be considered a (small) memory pool.
 
Megadrive1988 said:
512MB of RAM in combination with streaming levels like Halo would be wicked. High-res textures

I think the potential of streaming from flash memory (or whatever solid-state storage they use) to 512 MB of RAM would be pretty awesome, flash memory being much faster than even a fast HDD. 8)


:oops: :oops: :oops: :oops:

A truly streamed world... hehe, maybe travel all the way around a Halo in one go :p
 
flash memory being much faster than even a fast HDD.
From my real-life experience, flash memory (various CF, MMC and SD cards) can't even achieve a fraction of a good HDD's speed. On the other hand, I think the company making that flash drive (or whatever) for Xbox 2 is using some kind of more advanced technology than what's used in regular flash cards.
 
Paul said:
Capable in terms of Polygon rendering power? Particle?

We know that they plan on having the thing deliver a Teraflops and a TOPS performance.

If it's polygon power you're talking about... BE, assuming a Teraflops, would be able to deliver more polygons than you could ever use.

Oh, very big words there. People will always find a way to use up power: even if they can't make things better, they can make more, and if they can't do either, they can find a less efficient way of doing things.
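
For what it's worth, here's a back-of-the-envelope sketch of what "a Teraflops" buys you in vertices. The 100 flops per transformed/lit vertex figure is purely my guess, not anything from the patent:

Code:
#include <stdio.h>

/* Purely illustrative: how far a 1 TFLOPS budget stretches if you
   assume a fixed (guessed) flop cost per transformed/lit vertex. */
int main(void)
{
    double tflops           = 1.0e12; /* the 1 Teraflops figure from the thread */
    double flops_per_vertex = 100.0;  /* assumption, tweak to taste             */
    double fps              = 60.0;   /* also an assumption                     */

    double verts_per_second = tflops / flops_per_vertex;
    double verts_per_frame  = verts_per_second / fps;

    printf("%.2e vertices/s, %.2e vertices/frame at %.0f fps\n",
           verts_per_second, verts_per_frame, fps);
    return 0;
}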
 
Panajev said:
I think they can make it work
Well, it'd be neat if they did. Although there is some argument in favour of localizing sound memory, I think having direct APU processing available for sound is much more important than saving a few bytes of shared bandwidth. :p

Guden Oden said:
But there was also 10MB eDRAM on the GPU, so there would be 2 pools of memory.
No, that would be at least 3, as the CPU has a sizeable configurable scratchpad as well (at least 512 KB from what I can tell).
But then if we're counting transient local pools, why not include all caches too?
And if we want to be really anal, we should count memory-mapped registers as separate pools of memory too :? Counting all that makes what, a hundred memory pools for each system?

Meanwhile, there's still just one main memory pool that all units address. I don't have a formal UMA definition handy right now, but I think that qualifies.
 
Fafalada said:
But then if we're counting transient local pools, why not include all caches too?
And if we want to be really anal, we should count memory-mapped registers as separate pools of memory too :?

No need to go overboard here, man. :) Registers are generally only addressable by the device they're attached to, and they're not contiguous memory; they're all discrete units, while caches aren't addressable at all, so they don't exactly fit under the standard RAM definition... Why make things unnecessarily complicated?

According to that leaked image (which Deadmeat thinks is fake despite MS scaring at least one site into removing it, probably because it had PPC-based CPUs rather than the chip he scried with his magic ouija board, the POWER5 :LOL: ), there ARE at least two discrete pools of memory. Now, even if the GPU eDRAM is addressable by all units in the system, it's still a separate pool of RAM. This speaks against the UMA concept; you might just as well call a standard PC with an AGP card a UMA box as well, since there you can actually DMA straight into video mem from an expansion slot, or maybe even a USB/FireWire controller.
 
Re: ...

Deadmeat4 said:
Not necessarily, as the market laws of supply and demand don't apply when you manufacture your components yourself.
Of course they do, especially when somebody else can build it cheaper than you can in-house.
Guess it all depends on how cheap the cheaper version still is after they package it and sell it to the manufacturer whose in-house version is less cheap. In the end it's much easier for Sony to upgrade their own fabs, have complete control, and not worry about other people's problems. Look what happened to Nvidia with NV30. And it keeps happening to both Nvidia and ATi, who are always sitting there waiting for other people (those who actually manufacture the chips) to sort out their technology.
Sony will have complete control over their work, and also complete responsibility should anything go wrong. Which would be a great day for you, right DM...
 
No need to go overboard here, man. Registers are generally only addressable by the device they're attached to, and they're not contiguous memory;
Not at all, memory mapping is often used to address external device registers. For instance, the EE has mappings for ~30 GS registers.
And each of them is a contiguous 64 bits of memory :LOL:
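
That's literally all it is in code, something like this (the GS CSR address below is from memory and purely illustrative, so check the real docs before trusting it):

Code:
#include <stdint.h>

/* A "memory-mapped register" is just a fixed physical address that
   you read and write like ordinary memory. The address below is where
   the GS CSR control/status register sits on PS2 as far as I recall;
   it's an example, not a reference. */
#define GS_CSR ((volatile uint64_t *)0x12001000)

static inline uint64_t read_gs_csr(void)        { return *GS_CSR; }
static inline void     write_gs_csr(uint64_t v) { *GS_CSR = v;    }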

there ARE at least two discrete pools of memory.
I don't dispute that, but I don't think you can dismiss functional role when discussing the architecture like this.
Like I said, the image shows the CPU's L2 as addressable from the GPU, and that will only work if at least part of it is switchable between cache and linear memory. (It just so happens that this kind of configurable L2/L3 has been a common feature of IBM's PPCs at least since the 750CX, including Gekko.)

Now assume for a moment there was no eDRAM on the GPU, so we're just left with this CPU block. So every time a program switches the mode between scratchpad (SPR) and cache, the whole system changes from non-UMA to UMA or back?
 
Re: ...

Deadmeat4 said:
Custom parts never beat off-the-shelf stuff on pricing.

Huh huh... Tell MS that...
Still inferior to dedicated DRAM fabs like Samsung's.....
Have you been there? Have you seen Sony's product? Have you had the chance to compare them to "superior" products like Samsung's?
Or you have more problems to deal with in your hand, which raises cost...
Sure, but at the end of the day, if they were to use a 3rd party and the 3rd party had problems, Sony (or anyone else) would suffer anyway, because the third party would still need to recoup the costs incurred to solve their own little issues, therefore potentially charging Sony more.
Different situation. DRAM is a commodity product, GPUs aren't...
If you say so... :|
 
Re: ...

Guden Oden said:
Deadmeat4 said:
The problem is that both Xbox Next and PSX3 will have unified memory architecture

PS3 isn't UMA, and neither is nextbox by the looks of it, if that leaked document tells the truth.
Hey, whatever, UMA is just a word (or abbreviation). It's like discussing whether Pluto is a planet or an asteroid: pretty pointless, and it really doesn't change the facts.
 
Oh, very big words there, people will always find a way to use up power, because even if they can't make things better, they can make more, and if they can't do either, they can find a less effecient way of doing things.

Using the rendering techniques we use today, Broadband Engine (assuming the patent embodiment) would be able to push around more polygons than you would ever need; everything would be curved.

The burden would then be pushed more onto developers and time constraints.

"Sure... we have the power to create a tire with a thousand polygons... however we don't have the time for this shit"

I think next gen, with the PS3 especially (a polygon monster, it seems), this is going to be more and more commonplace: developers will have the power to push insane amounts of geometry into everything, but time constraints will be a huge issue, as will the skill of the artists.

I am predicting a huge demand for skilled 3D artists next gen. Didn't Naughty Dog have a job opening not too long ago for an artist who could create and skin 50K-polygon characters, if I'm not mistaken?
 
Using the rendering techniques we use today, Broadband Engine (assuming the patent embodiment) would be able to push around more polygons than you would ever need; everything would be curved.

I do not think PS3 will provide more polygons than you'll ever need. There is always room for more.

I.e., let's say a PS3 Kessen or LOTR game has 500 models on screen and each is 2,000 polygons (just an example); well, you could have a CGI with 500 models at 30,000 polygons each, and you could easily see the difference in complexity.

Personally, I don't think we'll stop needing more geometry until at least PS4, if not PS5.


With that said, I expect PS3 to be a huge improvement over PS2.
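
Putting rough numbers on that Kessen/LOTR example (the 60 fps is my assumption, the rest are the figures from the post):

Code:
#include <stdio.h>

/* The 500-model example above, with a 60 fps assumption added. */
int main(void)
{
    int models        = 500;
    int fps           = 60;    /* assumption */
    int polys_in_game = 2000;  /* per model, in-game  */
    int polys_in_cgi  = 30000; /* per model, CGI-like */

    printf("in-game: %d polys/frame, %.0f Mpolys/s\n",
           models * polys_in_game,
           (double)models * polys_in_game * fps / 1e6);
    printf("CGI-ish: %d polys/frame, %.0f Mpolys/s\n",
           models * polys_in_cgi,
           (double)models * polys_in_cgi * fps / 1e6);
    return 0;
}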
 
Megadrive1988 said:
I.e., let's say a PS3 Kessen or LOTR game has 500 models on screen and each is 2,000 polygons (just an example); well, you could have a CGI with 500 models at 30,000 polygons each, and you could easily see the difference in complexity.

For that, you'd want a good LOD implementation.
 
Megadrive1988 said:
I.e., let's say a PS3 Kessen or LOTR game has 500 models on screen and each is 2,000 polygons (just an example); well, you could have a CGI with 500 models at 30,000 polygons each, and you could easily see the difference in complexity.

I very much doubt that, because in order to fit 500 models on screen they wouldn't be much more than 2,000 pixels each anyway, if that much. Even Pikmin's little buggers never approach 500 on screen at a time, and it's already damn crowded.

With proper LOD handling, you'd only ever have around 30,000 polys in a model if it filled most, or at least much of the screen.
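
That LOD argument, sketched out: pick the poly budget from how much of the screen a model is expected to cover, so only a near-full-screen model ever gets the 30K version (the thresholds here are made up):

Code:
#include <stdio.h>

/* Made-up thresholds: pick a poly budget from the fraction of the
   screen a model's bounding volume is expected to cover. */
static int lod_poly_budget(double screen_coverage)
{
    if (screen_coverage > 0.50) return 30000; /* fills most of the screen    */
    if (screen_coverage > 0.10) return 8000;
    if (screen_coverage > 0.01) return 2000;
    return 500;                               /* speck in a 500-model crowd  */
}

int main(void)
{
    double coverages[] = { 0.60, 0.15, 0.02, 0.002 };
    for (int i = 0; i < 4; i++)
        printf("coverage %.3f -> %d polys\n",
               coverages[i], lod_poly_budget(coverages[i]));
    return 0;
}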
 
A somewhat off-topic suggestion to give more perspective: look outside your window. Now let's just think for a sec that this window is our framebuffer 50 years from now. You are most probably in the city, so compare that scene to the expectations you have for GTA5. The lucky ones who are in the woods are of course exploring the beautiful land of Hyrule through a "virtual headset" vs. (Zelda N256-bit).

What do you think PS3 or Cube2 will manage, at least? :LOL:
 
The diagram shows 22.4 GB/s RAM bandwidth. That memory won't be terribly expensive in the expected timeframe.
 