A thought on X-Box 2 and Creative Labs.

arjan makes good points above. Longer bursts provide much higher bus utilization. Unless you only rarely use contiguous data, longer bursts are usually a good idea from a bus-efficiency point of view.

For less technical readers, I'd like to point out that there is a problem that kicks in earlier than the point of optimum bus utilization: caches. Bursting in extra data is very cheap in terms of bus cycles in and of itself, but it is expensive in that you end up using a larger portion of your caches to store data nobody wanted. Furthermore, the data that is replaced has to be written back to memory, which costs bus cycles. Thus, the break-even point for memory-bus read burst length is much shorter in real life than what would be apparent from only looking at protocol efficiency, because longer bursts also lower cache efficiency and increase the cache logic's demands on bus cycles.
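To put some numbers on that, here's a quick back-of-the-envelope sketch. All the figures (protocol overhead, request size, dirty-line fraction) are made up for illustration; they aren't taken from any real GPU or memory controller.

```python
# Toy model of the burst-length trade-off described above.
# Every parameter here is an illustrative assumption, not a figure
# from any real memory controller or GPU.

def effective_bandwidth(burst_len, useful_words=4, overhead_cycles=6,
                        dirty_fraction=0.3):
    """Crude estimate of useful words delivered per bus cycle.

    burst_len       -- words transferred per read burst
    useful_words    -- words the requester actually wanted
    overhead_cycles -- fixed protocol cost per burst (RAS/CAS/turnaround)
    dirty_fraction  -- share of evicted cache lines that must be written
                       back to memory, costing extra bus cycles
    """
    data_cycles = burst_len                        # one word per bus cycle
    writeback_cycles = burst_len * dirty_fraction  # evictions caused by the fill
    total_cycles = overhead_cycles + data_cycles + writeback_cycles
    useful = min(useful_words, burst_len)          # anything beyond this is cache pollution
    return useful / total_cycles

for bl in (2, 4, 8, 16, 32):
    print(f"burst {bl:2d}: {effective_bandwidth(bl):.2f} useful words/cycle")
```

With these made-up numbers the sweet spot sits right at the request size (burst 4) and falls off quickly beyond it, even though raw bus utilization keeps improving with longer bursts.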

The trade-off is clearly application and GPU-architecture dependent. It is obviously impossible for an interested bystander like me to say what the optimum would be. I would predict that it isn't terribly sharp though; in other words, the finer points only make small differences.

Edit: Small addendum. You can increase cache efficiency by tagging data that you know shouldn't be cached, or by manually locking and releasing data in the cache. That would be workable for driver writers.

Entropy
 
Basic said:
The memory that fits best to "four bits of data per pin and clock cycle" is Kentron Technologies' QBM. But that's not a memory chip interface; it's a way to connect two DDR SDRAM chips with an external switch and double the data rate per pin that way. But don't expect this method to push the high-end data rate. It's more of a way to get rather high bandwidth out of cheap components than to get really high bandwidth. I don't expect to see QBM compete with high-end DDR, and certainly not with high-end DDR-II. Read more at www.kentrontech.com

I think you are a bit too negative. I'll quote you on the critical part again.

"It's more of a way to get rather high bandwidth out of cheap components...."
Now if that doesn't sound pretty damn useful, I have understood nothing about the biz.

The biz is also conservative though. I thought that 256-bit buses would come before any new memory technology, and that turned out to be a correct guess. But I most certainly wouldn't rule QBM out completely. If I were to build a card that needed a lot of memory with a lot of bandwidth, i.e. the memory would need to be big and fast, I would definitely look into QBM. And maybe reject it. But maybe not, since it most obviously _would_ be useful for such an application, not better than going 128 -> 256 bit wide perhaps, but quite possibly more appealing than going 256 -> 512.

Plus, it makes for a nice technological cycle. :)
64-bit SDR -> 128-bit SDR -> 128-bit DDR -> 256-bit DDR -> 256-bit QBM

(Too bad the QDR name was appropriated for memory technology which doesn't fit the technological pattern.)
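Just to spell out the arithmetic behind that cycle, here is a quick sketch; the 166 MHz clock is an arbitrary example figure, and QBM is simply treated as four transfers per pin per clock.

```python
# Peak-bandwidth arithmetic for the bus-width/signalling progression above.
# The 166 MHz clock is only an example figure to keep the comparison simple.

CLOCK_MHZ = 166

steps = [
    ("64-bit SDR",   64, 1),   # 1 transfer per pin per clock
    ("128-bit SDR", 128, 1),
    ("128-bit DDR", 128, 2),   # 2 transfers per pin per clock
    ("256-bit DDR", 256, 2),
    ("256-bit QBM", 256, 4),   # 4 transfers per pin per clock
]

for name, width_bits, transfers_per_clock in steps:
    bytes_per_sec = width_bits / 8 * transfers_per_clock * CLOCK_MHZ * 1e6
    print(f"{name:12s}: {bytes_per_sec / 1e9:5.1f} GB/s at {CLOCK_MHZ} MHz")
```

Each step roughly doubles peak bandwidth at the same clock, which is what makes the pattern appealing.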

There are several alternatives to the last step in the pattern above, but I wouldn't bet on cheap bandwidth going out of fashion all that soon. IF it is cheap enough, and not too difficult to incorporate into the design compared to alternative approaches, it may well be used.
In this forum, it seems that many are of the opinion that 128-bit DDR at 200+ MHz is some kind of high-end interface today. It's not. It is used on sub-$60 cards, for crissakes. What has been limiting new memory approaches is that they have to fit a very tight pricing envelope for a manufacturer to sell a design in the volume segment. DDR didn't quite fit there when it came, but soon did. The same goes for 256-bit buses now. The next step is up for grabs. But CHEAP is at the very top of the list of priorities.

Entropy
 
arjan de lumens said:
One issue with DDR-I vs DDR-II memory: Write-to-read bus turnaround is very slow in DDR-I, something like 1 clock + one full CAS latency (~4 cycles) during which the memory bus just sits idle. The DDR-II protocol has a fix for this problem, reducing bus turnaround to only 1 clock either way, potentially increasing protocol efficiency.

And DDR-II would not have twice the latency of DDR-I either. For the most part, each one of the memory latencies in DDR-II (RAS, CAS, precharge) will be about 1 clock longer than in DDR-I at the same clock speed, in order to align accesses to the half-speed core clock. In higher-speed DDR-I's (>250 MHz), these latencies are already at something like 4-6 clocks each, so the latency hit would be something like 20-25%.

Also, with DDR-I it is difficult to achieve good protocol efficiency with bursts shorter than 4 elements, so the burst-length-4 limitation of DDR-II may not be detrimental to performance at all.

So DDR-II has some advantages, but the fixed burst length of 4 has some performance impact. See NVIDIA's LMA: it has read sizes of 256, 128 and 64 bits.

It looks like LMA always does a burst of 2. If you need a texture, the LMA does a 256-bit read (burst of 2). With DDR-II it would have to do a 512-bit read (128 bits, burst of 4), or maybe do a 256-bit texture read plus a 256-bit framebuffer read.

I hope you are right and that the protocol-efficiency improvement will be enough to compensate for the longer bursts.
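For what it's worth, here is a crude sketch of how those two effects might trade off for a 256-bit LMA-style read on a 128-bit bus. The turnaround figures are taken from arjan's numbers above (~5 idle clocks for a DDR-I write-to-read turnaround, ~1 for DDR-II); the share of requests that actually hit that turnaround is my own assumption.

```python
# Crude comparison of the two effects discussed above: DDR-I's expensive
# write-to-read turnaround versus DDR-II's forced burst-of-4 over-fetch.
# The wr_to_rd_rate value is an assumption, not a measured figure.

def bus_clocks_per_request(request_bits, bus_bits, min_burst,
                           turnaround, wr_to_rd_rate=0.25):
    """Average bus clocks spent per read request (double data rate assumed)."""
    words_needed = request_bits // bus_bits   # bus-width transfers actually wanted
    burst = max(words_needed, min_burst)      # protocol may force a longer burst
    data_clocks = burst / 2                   # two transfers per clock (DDR)
    return data_clocks + wr_to_rd_rate * turnaround

ddr1 = bus_clocks_per_request(256, 128, min_burst=2, turnaround=5)
ddr2 = bus_clocks_per_request(256, 128, min_burst=4, turnaround=1)
print(f"DDR-I  : {ddr1:.2f} clocks per 256-bit read")
print(f"DDR-II : {ddr2:.2f} clocks per 256-bit read")
```

With these particular numbers the two effects roughly cancel out, which is exactly why the question is open: a workload with frequent write-to-read switches favours DDR-II, while one dominated by small scattered reads favours DDR-I's shorter bursts.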
 
A rumour that Microsoft will build their own chips for the X-Box 2.

Intel, NVidia, ATI... out of the picture?

Recent news stories report that Microsoft may design their own chips for the Xbox 2. This means that the chip manufacturers behind the current Xbox console may be out of the picture. For the Xbox 1, NVidia won the bid to supply the graphics chips, and Intel won the bidding war against CPU manufacturer AMD.


NVidia and MS... the end of a beautiful relationship

Nvidia's share price rose after they reported doing quite well even though worldwide Microsoft Xbox sales were below expectations. Insiders report that Microsoft didn't like Nvidia's refusal to lower the cost of their chipset, and apparently Microsoft is getting ready for legal action against Nvidia based on a breach of agreement between the two companies.


XBox 2 is a DIY for Microsoft

The main reason for this move is of course the cost to Microsoft of using these components in the Xbox, which is largely responsible for the high price of the console and makes it extra hard for MS to offer the console at a reduced price. The surprising news is that Microsoft aims not only to develop and produce their own graphics chips but also the main CPU, by which Microsoft plans to shake off its dependency on x86 suppliers.

Sources say that Microsoft has posted a request for quotation for a DirectX9/10 microcode engine, the part of the chip that decodes and processes DirectX instructions.


Rather receive than pay license fees

Microsoft's move into chip development and production was foreshadowed by their recent acquisition of Silicon Graphics patents. This means that they can implement in their chips many features that would otherwise require licensing, AND it means that other chip manufacturers like ATI and NVidia will have to start paying Microsoft license fees.


MSIL all the way

The CPU for the Xbox 2 is rumoured to be a processor that can decode and execute Microsoft Intermediate Language (MSIL) instructions while at the same time being able to run x86 code, to remain compatible with the Xbox 1.

URL Source



As far as the license fees are concerned, I have no idea whether they could charge companies like Nvidia for the technology.
 
I see MS taking more control over the design and production of XBox2 compared to XBox1 as a very likely step. With the current involvement of Intel and NVidia it will be very hard for MS to compete with Sony price-wise. The selection of Intel and NVidia was a logical step at the time to get XBox out without delays. But a combination of licensed MIPS- (or possibly an x86-) and Gigapixel-cores, where MS was responsible for the actual production would have been a better long-term solution.

This time, with XBox2, MS may have the time to develop the system in-house. I do believe they will try to get it out on the market before PS3, which could mean we will see it as soon as Christmas 2004. The design could be well underway, making 2004 a possibility.

MS do have a lot of in-house experience in 3D graphics, and with their deep pockets they can basically "hire the experience" they are missing. Not sure about the CPU design though.....

They could of course license complete cores instead of creating their own architectures. MIPS? PowerVR? Bitboys XBA (don't laugh)?

Not sure how important backward compatibility with XBox1 is. But I guess they will learn from the PS->PS2 transition that it's not that important (or is it?).
 
CoolAsAMoose said:
I see MS taking more control over the design and production of XBox2 compared to XBox1 as a very likely step. With the current involvement of Intel and NVidia it will be very hard for MS to compete with Sony price-wise. The selection of Intel and NVidia was a logical step at the time to get XBox out without delays. But a combination of licensed MIPS- (or possibly an x86-) and Gigapixel-cores, where MS was responsible for the actual production would have been a better long-term solution.

This time, with XBox2, MS may have the time to develop the system in-house. I do believe they will try to get it out on the market before PS3, which could mean we will see it as soon as Christmas 2004. The design could be well underway, making 2004 a possibility.

MS do have a lot of in-house experience in 3D graphics, and with their deep pockets they can basically "hire the experience" they are missing. Not sure about the CPU design though.....

They could of course license complete cores instead of creating their own architectures. MIPS? PowerVR? Bitboys XBA (don't laugh)?

Not sure how important backward compatibility with XBox1 is. But I guess they will learn from the PS->PS2 transition that it's not that important (or is it?).

Well, Nintendo has a very affordable console and they dealt with outside sources. I could see them going either way; I'm sure all options are on the table.

If Bit Boys got in on the X-Box 2 action, that would be so entertaining. Not as amusing if Microsoft used Sun's MAJC CPU though :eek: . Could you imagine one of Microsoft's most bitter enemies getting into bed with M$? It might be a sign the world is coming to an end if it happened.

I think backward compatibility is a good thing overall.
 