Will the next round of GPU/gfx card announcements be DX10 capable?

The problem I see with going serial is that GPUs are already pushing the limits of memory clocks. Unless someone develops memory that runs faster, going wider will remain the popular option.

Well, with serial, shouldn't things like dual, quad, eight, etc. channels be more feasible? I don't see any reason one couldn't just add more lanes.
 
I don't follow you. To me, going serial means having fewer pins and a higher clock in order to achieve the same bandwidth as a parallel interface. Where serial has an advantage is if it allows you to push the clocks higher than the parallel interface can go. Then, with multiple serial interfaces totalling the same number of bits as the parallel interface, you can achieve a higher bandwidth. What exactly do you mean by channels/lanes here?
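As a back-of-the-envelope illustration (the widths and clocks below are made up, not real GPU specs), peak bandwidth is just width × clock × transfers per clock, so a narrower link only comes out ahead if its per-pin rate can be pushed higher:

```python
# Rough comparison of a wide parallel bus vs. narrower, faster "lanes".
# All figures are illustrative, not real GPU specs.

def bandwidth_gbps(width_bits: int, clock_mhz: float, transfers_per_clock: int = 2) -> float:
    """Peak bandwidth in GB/s for a bus of the given width and clock (DDR by default)."""
    return width_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

# A hypothetical 256-bit parallel interface at 1000 MHz DDR:
parallel = bandwidth_gbps(width_bits=256, clock_mhz=1000)

# The same pin budget split into 4 lanes of 64 bits each, at the same clock,
# gives exactly the same total bandwidth:
lanes = 4 * bandwidth_gbps(width_bits=64, clock_mhz=1000)

# The serialised link only wins if the per-pin signalling rate can be raised
# beyond what the wide parallel bus can sustain, e.g. 4 lanes at 2000 MHz:
fast_lanes = 4 * bandwidth_gbps(width_bits=64, clock_mhz=2000)

print(parallel, lanes, fast_lanes)   # 64.0, 64.0, 128.0 GB/s
```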
 

Well, since one of the limitations on the memory bus of current graphics cards is the practically absurd trace routing that would be required for a 512-bit memory interface, it seems that something like FB-DIMMs could help alleviate that.

Of course, I don't really understand the implications, so the objections you've mentioned (and others you haven't) probably mean this has already been considered.


What I was thinking was that since clock domains are already separated in some chips, why not run only the memory interface at some godly speed (say a couple of gigahertz) while the rest of the chip runs at whatever it needs to. The effect would be significantly fewer pins for similar bandwidth. At least in my imaginary world =/
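To put rough numbers on that idea (purely hypothetical figures): for a fixed bandwidth target, the number of data pins needed falls in proportion to how far the interface clock can be raised:

```python
# Sketch of the "clock the interface higher, use fewer pins" idea.
# Numbers are hypothetical, not taken from any real part.

def required_width_bits(target_gbps: float, clock_mhz: float, transfers_per_clock: int = 2) -> float:
    """Bus width (in bits) needed to hit a target bandwidth at a given clock."""
    bits_per_sec = target_gbps * 1e9 * 8
    return bits_per_sec / (clock_mhz * 1e6 * transfers_per_clock)

target = 64.0  # GB/s, roughly what a 256-bit DDR bus at 1000 MHz delivers

for clock in (1000, 2000, 4000):  # MHz
    print(clock, "MHz ->", required_width_bits(target, clock), "data pins")
# 1000 MHz -> 256.0, 2000 MHz -> 128.0, 4000 MHz -> 64.0
```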
 
Yes, IIRC, Nvidia used a 2x128-bit crossbar. ATI brought out the first 256-bit interface with R300, and their new memory controller uses two of them to give a 512-bit interface in the same "2x" way (although it operates as a ring bus rather than a crossbar). Soon after R300, Nvidia went for a true 256-bit interface even though they'd previously been calling it unnecessary.
Just FYI, nVidia has made use of a 4-way crossbar on all of its high-end designs since the GeForce4 (and, as previous posters have noted, didn't have a 256-bit bus until the FX 5900).

The reason the internal memory controller was wider was always DDR, not any sort of crossbar. "Crossbar" refers to allowing smaller, independent memory accesses in order to improve memory bandwidth usage efficiency.
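A toy model of that efficiency argument (hypothetical sizes, not any particular GPU): with one monolithic wide bus, a small access still occupies the whole width for a cycle, whereas independent partitions can each service their own small request:

```python
# Toy model of why splitting one wide memory interface into several
# independently-addressable partitions (a "crossbar") helps with small accesses.
# Hypothetical numbers, not a model of any specific GPU.

BUS_BITS = 256          # total interface width
REQUEST_BITS = 64       # size of a typical small access

def useful_fraction(partitions: int) -> float:
    """Fraction of fetched bits actually requested, assuming each partition
    can service one independent request per cycle."""
    bits_per_partition = BUS_BITS // partitions
    fetched = max(bits_per_partition, REQUEST_BITS)
    return REQUEST_BITS / fetched

print("monolithic 256-bit bus :", useful_fraction(1))   # 0.25 -> 75% of the fetch wasted
print("4 x 64-bit crossbar    :", useful_fraction(4))   # 1.0  -> no waste
```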
 
I think it (the 4-way memory crossbar) has been there since the GeForce3, when they first introduced the Lightspeed Memory Architecture (LMA).
At least I remember some schematics of the memory architecture with 4 "blocks" interconnected with the GPU in all possible ways (plus each connected to each other).
Though I'm not 100% sure...
 
First GPU with a 128-bit memory bus: 3dfx Banshee / Nvidia NV4, introduced in 1998

.. but the original Voodoo Graphics chipset had two independent 64-bit busses, so the total bus width to memory was 128 bits.

and the Voodoo2 had three 64-bit busses, for a total memory bus of 192 bits.
 
There's a difference, chavvdarrr. The Voodoo 1 and 2 graphics cards had completely separate memory for textures and the frame/z-buffers. They had, for example, one chip that had a 64-bit memory access to its own memory space for use with frame and z-buffers. There was another chip, on the Voodoo1, that had its own 64-bit memory bus to storage for texture memory. The Voodoo2 had two such chips.

So while the Voodoo1 and 2 had totals of 128-bit and 192-bit memory busses, respectively, one cannot call their architectures the same as those of graphics boards that had just one major graphics chip with a 128-bit (or wider) bus, such as the RIVA 128, RIVA TNT, and Voodoo3. Making use of a shared memory bus made these later chips quite a bit more efficient in terms of their memory bandwidth usage.
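A rough sketch of that efficiency point (illustrative numbers only): two fixed-purpose pools can only use their full bandwidth when the workload happens to match the split, while a shared bus of the same total width can devote everything to whatever mix the frame needs:

```python
# Sketch of why a single shared bus can use its bandwidth more efficiently
# than two fixed-purpose pools of the same total width.  Illustrative only.

POOL_GBPS = 1.0          # per-pool bandwidth (e.g. one 64-bit bus)
UNIFIED_GBPS = 2.0       # one shared bus of the same total width

def achieved(tex_fraction: float) -> tuple[float, float]:
    """Usable bandwidth when `tex_fraction` of the traffic is texture reads
    and the rest is frame/z-buffer traffic; returns (split pools, unified bus)."""
    tex_demand = UNIFIED_GBPS * tex_fraction
    fb_demand = UNIFIED_GBPS * (1 - tex_fraction)
    split = min(tex_demand, POOL_GBPS) + min(fb_demand, POOL_GBPS)
    unified = min(tex_demand + fb_demand, UNIFIED_GBPS)
    return split, unified

for tex in (0.5, 0.75, 0.9):
    print(tex, achieved(tex))
# With a 50/50 mix the split design keeps up; as the mix skews towards
# textures, the frame-buffer pool sits idle while the texture pool saturates.
```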
 
so with Voodoo Graphics ~ Voodoo 1, the PixelFX chip had a 64-bit bus and the TexelFX chip had its own 64-bit bus.

then with Voodoo 2, PixelFX2 had a 64-bit bus, and the two TexelFX2 chips each had their own 64-bit busses.

then Banshee had a single 128-bit bus. Banshee was a single chip design.

same with Voodoo3 (basically Banshee2), a single chip with a single 128-bit bus.
 
Oh gosh now guys, don't forget about the Voodoo Rush! A crippled Voodoo Graphics attached to a half-wit 2D chip! :)
 
The prime version used the Alliance AT3D 2D/3D chip, and the 3Dfx chipset was only an upgrade for better 3D. Many AIBs gave up on this idea and produced a single-PCB version equipped with a 2D chip from Alliance (AT25) or Macronix (MX86251). The latter was known for its poor drivers and 2D image-quality issues.
 