PC-Engine said: Having a higher clock speed isn't necessarily an advantage. If memory A offers the same bandwidth as memory B but requires a doubling of clock frequency, A is not better than B.
I've read that, on the contrary, RDRAM is supposed to have very low latency, in part due to the high clockrate.
PC-Engine said: However I do know for a fact that Nintendo chose RDRAM for the N64 because of costs. It needed fewer PCB layers. Too bad they didn't consider latency. Don't know what the situation is with XDR though.
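As a quick illustration of PC-Engine's clock-speed point: peak bandwidth is just bus width times effective transfer rate, so a narrow bus at a high clock and a wide bus at a low clock can land on exactly the same figure. A minimal sketch, with illustrative numbers rather than any particular product's specs:

```python
# Peak bandwidth = bus width (bytes) * effective transfer rate (millions of
# transfers per second). Illustrative numbers only: a narrow bus at a high
# clock can match a wide bus at a low clock.

def peak_bandwidth_mb_per_s(bus_width_bits: int, mega_transfers_per_s: float) -> float:
    """Peak bandwidth in MB/s."""
    return (bus_width_bits / 8) * mega_transfers_per_s

memory_a = peak_bandwidth_mb_per_s(bus_width_bits=16, mega_transfers_per_s=800)  # narrow, fast
memory_b = peak_bandwidth_mb_per_s(bus_width_bits=64, mega_transfers_per_s=200)  # wide, slow

print(memory_a, memory_b)  # 1600.0 1600.0 -- same bandwidth despite a 4x clock difference
```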
notAFanB said: While that's all well and good for the PC space, for a fixed console I cannot see the advantages if the performance (of both parts at the time) is near enough identical.
What would be the likely cost reduction over the lifetime of the PS3 (including the initial purchase price)?
How does this compare with the projected figures for GDDR3 over the same period?
Again, isn't that only how it's implemented on the Intel boards?
akira888 said: The largest issue with RDRAM is that each RIMM is connected serially with the others, unlike DDR-SDRAM in which the DIMMs are arranged in a grid formation. The 16-bit data signal therefore has to travel through each RIMM (or placeholder stick in case of an empty slot), and the "furthest" RIMM from the memory controller will determine the latency.
Squeak said: I've read that, on the contrary, RDRAM is supposed to have very low latency, in part due to the high clockrate.
PC-Engine said: However I do know for a fact that Nintendo chose RDRAM for the N64 because of costs. It needed fewer PCB layers. Too bad they didn't consider latency. Don't know what the situation is with XDR though.
It might be the unfortunate implementation by Intel some years ago that has people confused?
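A rough way to picture akira888's serial-channel point: every RIMM (or continuity module) the signal has to pass through adds delay, and accesses are effectively timed to the furthest device. The base and per-hop figures below are made-up placeholders, not RDRAM datasheet numbers:

```python
# Rough sketch of a serially connected channel: every RIMM (or continuity
# module) between the controller and the target adds propagation delay, and
# the channel is effectively timed to the furthest device.
# BASE_NS and HOP_NS are hypothetical placeholders, not datasheet values.

BASE_NS = 30.0  # hypothetical controller + DRAM core latency
HOP_NS = 2.0    # hypothetical delay through each intervening RIMM

def serial_latency_ns(position: int) -> float:
    """Round-trip latency to the device in the given slot (1 = nearest)."""
    return BASE_NS + 2 * HOP_NS * position  # hop delay paid on the way out and back

for pos in range(1, 5):
    print(pos, serial_latency_ns(pos))  # 34.0, 38.0, 42.0, 46.0 ns
# With these placeholder numbers a 4-slot channel ends up running at the
# worst-case 46 ns, regardless of which RIMM actually holds the data.
```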
Simon F said: My understanding always was that RDRAM's latency was high but that it had very high data transfer rates. I could be wrong though.
The way I refer to it wasn’t helping you.
Guden Oden said:
David_South#1 said: A Unix/Linux/BSD variant will definitely be the OS.
A Hurd OS is the OS I’ve been using as a template.
This is in line with previous speculation, now the forum doesn't feel lost and confused anymore. Thanks for putting us back on track!
To those of us that - eeheh *coughs* unlike me - maybe aren't fully up to speed on this - what is a Hurd OS anyway?
it's compatible
The Hurd provides a familiar programming and user environment. For all intents and purposes, the Hurd is a modern Unix-like kernel. The Hurd uses the GNU C Library, whose development closely tracks standards such as ANSI/ISO, BSD, POSIX, Single Unix, SVID, and X/Open.
it's built to survive
Unlike other popular kernel software, the Hurd has an object-oriented structure that allows it to evolve without compromising its design. This structure will help the Hurd undergo major redesign and modifications without having to be entirely rewritten.
it's scalable
The Hurd implementation is aggressively multithreaded so that it runs efficiently on both single processors and symmetric multiprocessors. The Hurd interfaces are designed to allow transparent network clusters (collectives), although this feature has not yet been implemented.
it's extensible
The Hurd is an attractive platform for learning how to become a kernel hacker or for implementing new ideas in kernel technology. Every part of the system is designed to be modified and extended.
it's stable
It is possible to develop and test new Hurd kernel components without rebooting the machine (not even accidentally). Running your own kernel components doesn't interfere with other users, and so no special system privileges are required. The mechanism for kernel extensions is secure by design: it is impossible to impose your changes upon other users unless they authorize them or you are the system administrator.
it exists
The Hurd is real software that works Right Now. It is not a research project or a proposal. You don't have to wait at all before you can start using and developing it.
GNU Mach is the microkernel of the GNU system. A microkernel provides only a limited functionality, just enough abstraction on top of the hardware to run the rest of the operating system in user space. The GNU Hurd servers and the GNU C library implement the POSIX compatible base of the GNU system on top of the microkernel architecture provided by Mach.
Mach is particularly well suited for SMP and network cluster techniques. Thread support is provided at the kernel level, and the kernel itself takes advantage of that. Network transparency at the IPC level makes resources of the system available across machine boundaries (with NORMA IPC, currently not available in GNU Mach).
Guden Oden said: Luckily, the console isn't launching tomorrow. By the time it does launch, XDR will likely be considerably faster than it is now (well, ISN'T, really, but we have paper specs). Considering it uses differential signaling and all that, it should be able to go faster using 256 data pins (half data, half inverted data) than what GDDR3 manages with 256 data pins.
Key to XDR DRAM's draw is its high bandwidth per pin. Using a 400MHz clock, XDR can transmit eight data bits per clock (octal data rate) attaining a 3.2GHz/pin data rate. An 8-bit interface can transfer 3.2GB/sec, and a 32-bit interface hits 12.8GB/sec. Tack two 32-bit "XDIMMs" together and you reach 25.6GB/sec at 3.2GHz signaling. Rambus expects to quickly move to 6.4GHz/pin signaling, so if you expand to a 128-bit interface at 6.4GHz/pin, you can reach 102.4GB/sec. This is far beyond speeds on DDR roadmaps. Rambus attains such speeds using three key technologies: Differential Rambus Signaling Levels (DRSL), FlexPhase technology (to compensate for timing errors), and the octal data rate signaling mentioned previously. You can check out Rambus XDR info on their site for technical details. http://www.rambus.com/products/xdr/
http://www.extremetech.com/print_article/0,1583,a=119135,00.asp
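All the figures in that excerpt fall out of one formula: interface width in bits times per-pin data rate, divided by eight to get bytes per second (the per-pin rate itself being the 400MHz clock times eight for octal data rate). A small sketch reproducing the quoted numbers:

```python
# Reproduce the XDR figures quoted above: bandwidth (GB/s) =
# interface width (bits) * per-pin data rate (Gbps) / 8.
# The per-pin rate is clock * 8 for octal data rate: 400MHz * 8 = 3.2Gbps.

def xdr_bandwidth_gb_per_s(width_bits: int, gbps_per_pin: float) -> float:
    return width_bits * gbps_per_pin / 8

print(xdr_bandwidth_gb_per_s(8, 3.2))    # 3.2   GB/s  (8-bit interface)
print(xdr_bandwidth_gb_per_s(32, 3.2))   # 12.8  GB/s  (32-bit interface)
print(xdr_bandwidth_gb_per_s(64, 3.2))   # 25.6  GB/s  (two 32-bit "XDIMMs")
print(xdr_bandwidth_gb_per_s(128, 6.4))  # 102.4 GB/s  (128-bit at 6.4Gbps/pin)
```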
PC-Engine said:
Simon F said: My understanding always was that RDRAM's latency was high but that it had very high data transfer rates. I could be wrong though.
That was my understanding too. Good for streaming data but not good for random access.
The Alpha EV7 integrated (multiple) RDRAM controllers on the die and has a load-to-use latency in the 75 ns range (ie. faster than SDRAM).
PC-Engine said: The Alpha EV7 integrated (multiple) RDRAM controllers on the die and has a load-to-use latency in the 75 ns range (ie. faster than SDRAM).
Faster than DDR1 at the time?
David_South#1 said: In the way of XDR, it was my impression that 32 data pins / 16 bits was as many as they would have implemented by this time.
But you need two 32-bit XDR DIMMs at 3.2GHz to reach 25.6GB/sec.
Rambus still has to achieve 6.4GHz or quadruple (=64-bit) the bit count to reach 50GB/sec.
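Working those numbers the other way round, with the same formula as above: given a target bandwidth, how wide or how fast does the XDR interface need to be? Using 51.2GB/sec as the round doubling of 25.6GB/sec:

```python
# Same formula rearranged: width_bits = target_GB_s * 8 / gbps_per_pin.
import math

def required_width_bits(target_gb_per_s: float, gbps_per_pin: float) -> int:
    return math.ceil(target_gb_per_s * 8 / gbps_per_pin)

print(required_width_bits(25.6, 3.2))  # 64  -> e.g. two 32-bit DIMMs at 3.2Gbps/pin
print(required_width_bits(51.2, 3.2))  # 128 -> 128 bits total at 3.2Gbps/pin...
print(required_width_bits(51.2, 6.4))  # 64  -> ...or stay at 64 bits and reach 6.4Gbps/pin
```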
Vince said:
arjan de lumens said: XDR uses differential pairs at 3.2Gbps per pair, which gives the same per-pin bandwidth as (non-differential) 1.6 Gbps GDDR3 memory. No gain there.
Uhh, please correct me if I'm wrong, but it was my understanding that with normal single-ended signaling, you'll need a ground pin for every data pin.
Thus, for every pair of pins on the package, you'll yield 1.6Gbps with GDDR3, but with first generation XDR, you'll yield 3.2Gbps. And XDR is slated to scale from its current 400MHz to 800MHz (6.4GHz) within 2 years.
Basic said:
Vince said: Uhh, please correct me if I'm wrong, ...
You are.
There's no special "signal ground" lines in GDDR3.
IO signals are relative to the power ground, and I don't think there are any extra power ground signals compared to XDR.
Vince said:
Basic said: You are.
There's no special "signal ground" lines in GDDR3.
IO signals are relative to the power ground, and I don't think there are any extra power ground signals compared to XDR.
Interesting, then how exactly does GDDR3 get around the noise problems associated with unbalanced transmission? Where's a good place for more specific information? I remember hearing that in single-ended signaling, your ratio of active signal pins to power and ground pins will converge on 1:1 - obviously, you can get around this with XDR's DRSL, which should give you 2X the mean per-pin transmission rate as the ratio nears 1:1, but I've never read up on any of the DDR2/GDDR3 specs.
MfA said: You can isolate signals by putting grounded traces in between without having extra grounding pins ... power consumption and signal integrity are separate issues.
Samsung's GDDR3 chips have 14 power and 18 ground pins designated for the output drivers; they seem to have separate power grids for the memory core and the I/O, for 32 I/O pins.
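The per-pin comparison running through this exchange can be put in numbers using only figures quoted in the thread: arjan de lumens' 3.2Gbps per XDR differential pair versus 1.6Gbps per single-ended GDDR3 pin, plus MfA's count of 14 power and 18 ground pins backing Samsung's 32 GDDR3 I/O pins. XDR's own power/ground overhead isn't given here, so the sketch leaves that side open rather than guessing at it:

```python
# Per-pin comparison using only figures quoted in the thread:
# 3.2Gbps per XDR differential pair (two pins) vs 1.6Gbps per single-ended
# GDDR3 pin, and MfA's 14 power + 18 ground pins for Samsung's 32 I/O pins.

GDDR3_IO_PINS = 32
GDDR3_OUTPUT_PWR_GND_PINS = 14 + 18
GDDR3_GBPS_PER_PIN = 1.6

XDR_PINS_PER_PAIR = 2
XDR_GBPS_PER_PAIR = 3.2

# Counting data pins alone, the two come out identical (arjan de lumens' point):
print(GDDR3_GBPS_PER_PIN)                     # 1.6 Gbps per GDDR3 data pin
print(XDR_GBPS_PER_PAIR / XDR_PINS_PER_PAIR)  # 1.6 Gbps per XDR data pin

# Counting the Samsung part's output power/ground pins as well, GDDR3's
# effective figure drops; the XDR side would need its own (unstated here)
# power/ground count for a fair comparison.
gddr3_total_gbps = GDDR3_IO_PINS * GDDR3_GBPS_PER_PIN
print(gddr3_total_gbps / (GDDR3_IO_PINS + GDDR3_OUTPUT_PWR_GND_PINS))  # 0.8 Gbps per pin
```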