Qimonda wins contract to supply chips for Xbox 360

I wonder when they will move to 1024 Mbit chips? The volumes of the 360 are already pretty high. Maybe they will wait until more products are willing to share the initial transition costs. I guess it will probably take another year until there are graphics cards using 1024 Mbit chips (1 GB cards with a 256-bit bus); perhaps they will even move to GDDR4 before that.

Is it possible that graphics memory chips will move to a 64-bit data path sometime in the future? Is there some rationale for not doing that (chip size, etc.)?
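A quick back-of-the-envelope sketch of the chip-count side of that question (the 1 GB / 256-bit card is just the example above; the densities and per-device widths are standard GDDR3 options, assumed for illustration):

Code:
# Devices needed for a hypothetical 1 GB card with a 256-bit memory bus.
CARD_MBYTES = 1024   # 1 GB of graphics memory
BUS_BITS    = 256    # total memory bus width

for density_mbit in (512, 1024):
    chips = CARD_MBYTES * 8 // density_mbit   # 8 bits per byte
    io_per_chip = BUS_BITS // chips           # data-path width each device must provide
    print(f"{density_mbit} Mbit chips: {chips} devices at {io_per_chip} bits each")

# 512 Mbit chips:  16 devices at 16 bits each
# 1024 Mbit chips:  8 devices at 32 bits each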
 
Crossbar said:
I wonder when they will move to 1024 Mbit chips? The volumes of the 360 are already pretty high. Maybe they will wait until more products are willing to share the initial transition costs. I guess it will probably take another year until there are graphics cards using 1024 Mbit chips (1 GB cards with a 256-bit bus); perhaps they will even move to GDDR4 before that.

I doubt they will ever change chips. There is no reason to do so. All 360 games will be based around the current hardware, and all adding new chips would do is increase cost; it wouldn't increase the system's performance.

They'll stick with what they are using now, and simply take advantage of price drops as newer and better chips are released.

Is it possible that graphics memory chips will move to a 64-bit data path sometime in the future? Is there some rationale for not doing that (chip size, etc.)?

See above. It would be an added cost with no added benefit because all of the games will be designed around the current hardware.
 
bit not byte

I think he meant the actual chips on the RAM stick, not the total system RAM. :)

It should shave some costs off the box.
 
There aren't any sticks of RAM in the 360; it uses discrete memory devices surface-mounted straight to the mainboard (like on a graphics card).

Possibly a cost reduction could be attained in the future by doubling memory density and halving the number of chips, but higher-density memory is often more expensive initially per megabyte, so it could be a while before any cost benefit would be seen. It could also prompt hardware changes, a mobo revision, etc., which would bring additional costs that would have to be weighed against any possible savings.

It could also possibly screw with low-level hardware details, stuff like the number of memory pages the RAM controller can keep open at any one time. Fewer chips = fewer open pages, which would have a negative performance impact. The same could easily be the case with a possible future 64-bit memory device (as opposed to today's 32-bit), I might add, plus you'd definitely need a new mobo in that case.

So in all, I wouldn't expect it really.
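To put rough numbers on the initial-cost point (the prices and premiums below are pure assumptions, only there to show the shape of the trade-off):

Code:
# Hypothetical bill-of-materials comparison: 8x 512 Mbit vs. 4x 1024 Mbit for 512 MB total.
PRICE_512_MBIT = 4.00   # assumed price per 512 Mbit device (USD), purely illustrative

for premium in (1.0, 0.5, 0.2, 0.0):   # per-megabyte premium of the denser part
    price_1024 = 2 * PRICE_512_MBIT * (1 + premium)
    print(f"{premium:.0%} premium: 8x512 Mbit = ${8 * PRICE_512_MBIT:.2f}, "
          f"4x1024 Mbit = ${4 * price_1024:.2f}")

# At any per-megabyte premium above zero, the denser parts cost more for the same
# 512 MB; only at rough price parity (or with savings on board area and assembly)
# does the swap start to pay off.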
 
Guden Oden said:
There aren't any sticks of RAM in the 360; it uses discrete memory devices surface-mounted straight to the mainboard (like on a graphics card).

Possibly a cost reduction could be attained in the future by doubling memory density and halving the number of chips, but higher-density memory is often more expensive initially per megabyte, so it could be a while before any cost benefit would be seen. It could also prompt hardware changes, a mobo revision, etc., which would bring additional costs that would have to be weighed against any possible savings.

It could also possibly screw with low-level hardware details, stuff like the number of memory pages the RAM controller can keep open at any one time. Fewer chips = fewer open pages, which would have a negative performance impact. The same could easily be the case with a possible future 64-bit memory device (as opposed to today's 32-bit), I might add, plus you'd definitely need a new mobo in that case.

So in all, I wouldn't expect it really.

Thank you for sharing your knowledge on the subject, Guden, and for laying out the potential pitfalls of chasing savings through hardware revisions. However, would you agree that at some point in the next year or so they might take a stab at a motherboard revision along with the 65 nm GPU/CPU processes, and at that time revisit the issue of 512 Mbit vs. 1024 Mbit chips for cost reduction?
 
I'll bet you Infineon is supplying faster memory this time around, so that off-spec modules are more likely to meet Microsoft's minimum requirement. ;)
 
Guden Oden said:
It could also possibly screw with low-level hardware details, stuff like the number of memory pages the RAM controller can keep open at any one time. Fewer chips = fewer open pages, which would have a negative performance impact. The same could easily be the case with a possible future 64-bit memory device (as opposed to today's 32-bit), I might add, plus you'd definitely need a new mobo in that case.

So in all, I wouldn't expect it really.
Thanks, I did not think about the memory pages. I thought it was a given that the 360 would move to a 4-chip memory solution in the future to save costs, but as you say, that may have a negative performance impact.

But would a 64-bit data path really be a similar case? I mean, cache lines are never shorter than 8 bytes, so the CPU would want to update the complete line anyhow.
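A small sanity check on that (the line size is an assumption, not a 360 spec):

Code:
# Transfers needed to fill one cache line from a single DRAM device.
CACHE_LINE_BYTES = 64   # illustrative cache-line size (assumption)

for io_bits in (32, 64):
    beats = CACHE_LINE_BYTES // (io_bits // 8)   # data transfers per line fill
    print(f"{io_bits}-bit device: {beats} transfers per {CACHE_LINE_BYTES}-byte line")

# 32-bit device: 16 transfers; 64-bit device: 8 transfers. Either way the full
# line is fetched in one burst, so a wider data path wastes nothing on CPU-style
# line fills -- which is the point being made here.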
 
Crossbar said:
as you say, that may have a negative performance impact.
I'm totally speculating of course, but it's something that might be important. Memory buffers and pipelines in xenos are likely dimensioned for the current memory chip setup.

But would a 64-bit data path really be a similar case? I mean, cache lines are never shorter than 8 bytes, so the CPU would want to update the complete line anyhow.
There's not just a CPU in the 360 tho, heh. In fact, I could well imagine xenos consumes even more main memory bandwidth than the CPU does.

TheChefO said:
However, would you agree that at some point in the next year or so they might take a stab at a motherboard revision along with the 65 nm GPU/CPU processes, and at that time revisit the issue of 512 Mbit vs. 1024 Mbit chips for cost reduction?
I guess everything is possible, but like I said, when a new chip density is released it is often (majorly) more expensive than the current standard chip. A price premium of 100% at the consumer level is not at all unheard of. Also like I said, the behavior of the memory subsystem WOULD change by reducing the number of chips - for the worse, I might add - which would make it less likely that MS would do this.

When MS wants to cost-reduce the system, the biggest savings would most likely come from shrinking the main chips and eliminating the separate DRAM die in xenos, not from halving the number of RAM chips...

My speculation of course.
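For scale, a very rough sketch of what the 65 nm shrink alone could buy, assuming an ideal linear shrink (which real chips never quite achieve):

Code:
# Idealised die-area scaling from 90 nm to 65 nm.
OLD_NODE_NM = 90
NEW_NODE_NM = 65

area_ratio = (NEW_NODE_NM / OLD_NODE_NM) ** 2
print(f"Ideal shrink: new die area = {area_ratio:.0%} of the old area")

# Roughly 52% of the original area, i.e. close to twice the dies per wafer in the
# best case -- which is why shrinking the CPU/GPU dwarfs any saving from swapping
# DRAM densities.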
 
Guden Oden said:
I guess everything is possible, but like I said, when a new chip density is released it is often (majorly) more expensive than the current standard chip. A price premium of 100% at the consumer level is not at all unheard of. Also like I said, the behavior of the memory subsystem WOULD change by reducing the number of chips - for the worse, I might add - which would make it less likely that MS would do this.

When MS wants to cost-reduce the system, the biggest savings would most likely come from shrinking the main chips and eliminating the separate DRAM die in xenos, not from halving the number of RAM chips...

My speculation of course.


Agreed, those will be effective measures to reduce cost. Did the PS2 stick with the same size RAM chips throughout its life cycle? I know larger chips are slower, but are they so much slower that they would be avoided outright even if a pricing advantage became available?
 
To contrast with what Guden has said: while there are issues that need to be accounted for, part of planning for cost reduction is taking these things into consideration. Most consoles, at some point, do consolidate memory. While larger memory modules are more expensive initially, having fewer will save money in the long run. This stuff is usually part of the console design process. MS *really* screwed up here last time, and everything we have heard from them is that the console, top to bottom, is designed to drop in price over time. Eventually MS is going to want the Core to reach $99-$129, and buying fewer, denser memory chips is part of that process. It is safe to say that at some point they will transition to denser memory modules. When really depends on price and on when they plan to do board revisions.
 
Guden Oden said:
But would a 64-bit data path really be a similar case? I mean, cache lines are never shorter than 8 bytes, so the CPU would want to update the complete line anyhow.
There's not just a CPU in the 360 tho, heh. In fact, I could well imagine xenos consumes even more main memory bandwidth than the CPU does.
Yes, but wouldn't the GPU be even more prone to read/write large blocks of consecutive memory? If we look at the general case of a low-end graphics card with a 64-bit memory bus, I think it could benefit from a one-memory-chip solution. Let us say a 64 MB chip for the frame buffer, with textures fetched from main memory over PCIe. This would be a high-volume chip that could be used in the complete range of graphics cards from low end to high end, so I think there would be some economy of scale here.

I mean, DRAM chips have moved from 8-bit to 16-bit to 32-bit data paths. Will they move to 64 bits sometime soon? I believe GDDR is the type of memory that would benefit most from such a move.

I know this is a side discussion, but I couldn't help myself. :)
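A sketch of that single-chip case (capacity, width and data rate below are illustrative assumptions, not a specific product):

Code:
# A hypothetical low-end card built around one wide DRAM device.
DENSITY_MBIT = 512              # one 512 Mbit device = 64 MB of local memory
IO_BITS      = 64               # assumed per-device data path
GBPS_PER_PIN = 1.4              # illustrative GDDR3-class data rate

capacity_mb  = DENSITY_MBIT // 8
peak_gb_s    = IO_BITS * GBPS_PER_PIN / 8
print(f"One device: {capacity_mb} MB local memory, ~{peak_gb_s:.1f} GB/s peak")

# -> 64 MB and roughly 11 GB/s from a single chip, with further textures
#    streamed over PCIe as described above.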
 
Crossbar said:
Yes, but wouldn't the GPU be even more prone to read/write large blocks of consecutive memory?
I don't think so. Considering how many different streams to memory a GPU is handling - framebuffer, render-to-texture and Z-buffer read/writes, texture reads (often multiple per pixel, with many texels read at a time for certain filter modes), shader program reads, command list reads and so on - large block accesses aren't the first thing that enters my mind. :p Oh, and take all this x2 for the two quads in xenos as well. :)

If we look at the general case of a low-end graphics card with a 64-bit memory bus, I think it could benefit from a one-memory-chip solution.
Not applicable to our case here, but I'm sure you're right, from a cost standpoint. Performance, probably not, but that's never really a consideration in the low-end range anyway. Still, this is totally beside the point of this thread.
 
Acert93 said:
To contrast with what Guden has said: while there are issues that need to be accounted for, part of planning for cost reduction is taking these things into consideration. Most consoles, at some point, do consolidate memory. While larger memory modules are more expensive initially, having fewer will save money in the long run. This stuff is usually part of the console design process. MS *really* screwed up here last time, and everything we have heard from them is that the console, top to bottom, is designed to drop in price over time. Eventually MS is going to want the Core to reach $99-$129, and buying fewer, denser memory chips is part of that process. It is safe to say that at some point they will transition to denser memory modules. When really depends on price and on when they plan to do board revisions.


Thank you, Acert - I knew the 360 was designed with the lessons learned from the Xbox 1 in regard to cost savings, but I wasn't sure whether there was some serious technical hurdle to overcome in including the RAM chips in that cost-reduction equation. Thanks for ironing that issue out. :)
 
Guden Oden said:
Crossbar said:
Yes, but wouldn't the GPU be even more prone to read/write large blocks of consecutive memory?
I don't think so. Considering how many different streams to memory a GPU is handling - framebuffer, render-to-texture and Z-buffer read/writes, texture reads (often multiple per pixel, with many texels read at a time for certain filter modes), shader program reads, command list reads and so on - large block accesses aren't the first thing that enters my mind. :p Oh, and take all this x2 for the two quads in xenos as well. :)
After consulting some people (more knowledgeable than me in the GPU field), I stand by my point that GPUs in general are more prone than CPUs to access large chunks of consecutive memory. Even if you are correct that GPUs access all these kinds of different data types, GPUs have deep pipelines and try to organise memory accesses into large blocks that efficiently use the burst access mode of the memory. Block sizes of 32 bytes and larger are not uncommon.
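To illustrate the block sizes involved (the bus widths and burst lengths are typical values, used here as assumptions):

Code:
# Bytes moved per burst for a few bus widths and burst lengths.
for bus_bits in (64, 128, 256):
    for burst_len in (4, 8):
        block_bytes = bus_bits // 8 * burst_len
        print(f"{bus_bits}-bit bus, burst of {burst_len}: {block_bytes} bytes per access")

# Even a 64-bit bus with a burst of 4 moves 32 bytes at a time, so the
# 32-byte-and-up blocks mentioned above fall out naturally from burst mode.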

BTW I found this nice summary of the current state of graphics memory:

"Graphics and high-speed data networks require higher data-transfer speed. Special SDRAM implementations can deliver this higher bandwidth by reducing access time and system delays. These implementations include the graphics DRAM (GDRAM) and the reduced-latency DRAM (RLDRAM), also known as the network DRAM.

The GDDR memories provide higher data bandwidths for graphics engines by employing 16- or 32-bit wide data buses and tight timing margins. Over the last few years, though, the graphics bus width jumped from 64 bits to 128 bits, and now it is at 256 bits.

These wide buses require wide datapaths on the graphics DRAMs to minimize the number of memory chips used on the graphics card. If 8-bit wide memories are used, a 256-bit wide bus would need 32 chips. Obviously, half that number of chips would be required for 16-bit wide memories, and half again for 32-bit wide datapaths.

Because the high-end graphics card's memory typically maxes out at 256 or 512 Mbytes, multiple 4-Mword by 32-bit or 8-Mword by 32-bit GDDR2 or GDDR3 memory devices more than meet the density requirements. Such wide buses deliver the aggregate bandwidth to the graphics engine.

GDDR memory is similar to DDR memory in operation. But GDDR chips usually can offer tighter margins, because bus lengths and loading are more tightly controlled than that of the main computer memory. Also, memory vendors modified the internal DRAM architecture to better optimize the chips for graphics applications.

Some DRAM vendors offer various generations of GDDR memories (GDDR, GDDR2, and GDDR3). Going in another direction, however, Toshiba and Samsung are sampling a high-speed memory based on Rambus' extreme-data-rate (XDR) interface. Rather than use the DRAM's traditional bus architecture, the XDR interface is more of a point-to-point approach. Each XDR memory chip delivers data to the controller over its own datapath.

The separate datapaths eliminate loading issues. Differential signaling permits the XDR interface to use very small signal swings (200 mV) that deliver eight bits per clock cycle. As a result, these memories offer at least double the data-transfer bandwidth per pin versus the fastest GDDR3 graphics RAMs--3.2 Gbits/s versus 1.6 Gbits/s for the 800-MHz GDDR3 speed grade. XDR memory suppliers expect to increase the speed to 4.8 Gbits/s by late this year and eventually up to 9.6 Gbits/s per data pin by 2008."
http://www.elecdesign.com/Articles/Print.cfm?AD=1&ArticleID=10095
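Working out the aggregate numbers the article implies (using its 256-bit bus example and its quoted per-pin rates; treating XDR's point-to-point links as if they summed to the same 256 data pins, purely for comparison):

Code:
# Aggregate bandwidth over 256 data pins at the article's per-pin data rates.
BUS_BITS = 256

for name, gbit_per_pin in (("GDDR3 @ 800 MHz", 1.6),
                           ("XDR today", 3.2),
                           ("XDR late this year", 4.8),
                           ("XDR by 2008", 9.6)):
    gb_per_s = BUS_BITS * gbit_per_pin / 8
    print(f"{name}: {gb_per_s:.1f} GB/s")

# 1.6 Gbit/s/pin -> 51.2 GB/s; 3.2 -> 102.4; 4.8 -> 153.6; 9.6 -> 307.2 GB/s.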
 