Will the next round of GPU/gfx card announcements be DX10 capable?

Guden Oden

It's been a long, boring while since anything major happened on the GPU front, and the wait between releases has been getting longer and longer for a good while now. No new architecture has been released, basically, since the 6800 was shown for the first time (ATi's response came a little later, as I recall)...

So when something DOES happen, will it be something major, something new, or will it just be another evolution of what we've already got?

Also, when will this be? August, or at least the end-ish of summer, has traditionally been a period when we can expect rumblings to start happening from our two big players, but this year, nothing! Will it be February then (the other traditional month) before we hear any DX10 news, or can we expect a final round of DX9 cards during autumn first?

What say ye? :)
 
Rumours are end of year(ish) or early 2007 for G80 and R600, with some level of DX10 support. I think G80 may be a more traditional design, and R600 will go all the way with a unified architecture.

Both companies are playing it very close to their chests, as they have a lot of leeway due to the Vista slip.
 
Plus, let me add a question to this topic if you don't mind. It has been, what, 7 years(?) since the first 256-bit GPU by Nvidia hit the market. 7 years is a long time in the PC industry and they're still stuck on 256-bit GPUs. Why? Is it memory BW limitations?
If I recall, Matrox launched some Parhelia 512-bit GPU around 2 or 3 years ago, but its performance wasn't impressive at the time. If memory bandwidth was the problem then, wouldn't GDDR4 be up to a 512-bit bus now? If not, would at least a 384-bit one?
What's stopping them? Memory BW or just costs?
 
The 'bitness' of a GPU is essentially a marketing term, but you're confusing it with memory interface width. Matrox called the Parhelia a 512-bit GPU because it had four 128-bit vertex shader units.
 
I wrote that thinking about the memory interface, yes, but thanks for clearing that up about the Parhelia; I thought it had a 512-bit memory interface! (So now I realize that the first 256-bit GPU from Nvidia was only 256-bit internally and that didn't apply to the memory interface, correct me if I'm wrong though :p)
 
IIRC, the main problem with enlarging the memory bus is cost, because a larger bus means more pins on the GPU as well as more traces on the board, making the PCB cost very significant. Going DDR/DDR2/GDDR3 and now GDDR4 is another way to improve memory bandwidth.

After that, GPU manufacturers also work to improve effective bandwidth with bandwidth-saving measures, such as early triangle reject and various forms of compression...
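
To put rough, purely illustrative numbers on that trade-off (these figures aren't tied to any particular card): peak bandwidth is just bus width times effective data rate, so faster memory on a 256-bit bus closes much of the gap to a hypothetical 512-bit bus.

Code:
# Rough, illustrative peak-bandwidth arithmetic, not tied to any real card:
# peak GB/s = (bus width in bits / 8) * effective data rate in GT/s
def peak_bandwidth_gbs(bus_bits, data_rate_gts):
    return bus_bits / 8 * data_rate_gts

configs = [
    ("256-bit @ 2.0 GT/s (GDDR3-class)", 256, 2.0),   # 64.0 GB/s
    ("256-bit @ 3.2 GT/s (GDDR4-class)", 256, 3.2),   # 102.4 GB/s
    ("512-bit @ 2.0 GT/s (hypothetical)", 512, 2.0),  # 128.0 GB/s
]
for name, bits, rate in configs:
    print(f"{name}: {peak_bandwidth_gbs(bits, rate):.1f} GB/s")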
 
I wrote that thinking about the memory interface, yes, but thanks for clearing that up about the Parhelia; I thought it had a 512-bit memory interface! (So now I realize that the first 256-bit GPU from Nvidia was only 256-bit internally and that didn't apply to the memory interface, correct me if I'm wrong though :p)

Yes, IIRC, Nvidia used a 2x128-bit crossbar. ATI brought out the first 256-bit interface with R300, and their new memory controller uses two of them to give a 512-bit interface in the same "2x" way (although it operates as a ring bus rather than a crossbar). Soon after R300, Nvidia went for a true 256-bit interface even though they'd previously been calling it unnecessary.

256-bit seems to be the current sweet spot on graphics cards insofar as cost vs. pins vs. memory speed/bandwidth. 512-bit just doesn't look very viable on discrete chips. Faster memory (GDDR4) and better utilization of the bandwidth they have look to be the way they're going before thinking about increasing the bitness of the buses.
 
Yes, IIRC, Nvidia used a 2x128-bit crossbar. ATI brought out the first 256-bit interface with R300, and their new memory controller uses two of them to give a 512-bit interface in the same "2x" way (although it operates as a ring bus rather than a crossbar). Soon after R300, Nvidia went for a true 256-bit interface even though they'd previously been calling it unnecessary.

256-bit seems to be the current sweet spot on graphics cards insofar as cost vs. pins vs. memory speed/bandwidth. 512-bit just doesn't look very viable on discrete chips. Faster memory (GDDR4) and better utilization of the bandwidth they have look to be the way they're going before thinking about increasing the bitness of the buses.

You are confusing things.

The GeForce 256 had a single 128-bit bus, not a "2x128-bit" bus.
Nvidia's first 256-bit GPU memory bus was on the GeForce FX 5900 family.

Just the same, there are currently no cards on the market with a 512-bit-wide memory bus to a single GPU (the cost of such a thing would be astronomical right now, not counting the trace-routing nightmare).
Crossfire and SLI setups do not apply because there is not a single, unified memory bank shared by both (2 or 4) GPUs.
The same goes for "internal bitness".
To me, all that really counts is the external path to the actual memory banks.


Frankly, I don't see the need for wider buses so soon, as memory speeds are making up for it, and they are much more economical than the former method of bandwidth scaling.
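
A rough sketch of where that cost comes from, assuming typical 32-bit-wide GDDR devices and counting only data lines (address/command, power and ground pins add more on top):

Code:
# Rough sketch of how board/package cost scales with bus width.
# Assumes 32-bit-wide GDDR devices and counts only data lines;
# address/command, power and ground add more on top.
GDDR_DEVICE_WIDTH = 32  # bits per memory chip (typical for GDDR3/GDDR4)

def bus_cost_estimate(bus_bits):
    chips = bus_bits // GDDR_DEVICE_WIDTH  # memory devices to place and route
    data_traces = bus_bits                 # at least one PCB trace per data line
    return chips, data_traces

for bus in (128, 256, 384, 512):
    chips, traces = bus_cost_estimate(bus)
    print(f"{bus}-bit bus: {chips} memory chips, {traces}+ data traces")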
 
I wrote that thinking about the memory interface, yes, but thanks for clearing that up about the Parhelia; I thought it had a 512-bit memory interface! (So now I realize that the first 256-bit GPU from Nvidia was only 256-bit internally and that didn't apply to the memory interface, correct me if I'm wrong though :p)

The 256 in the GeForce 256 didn't apply to the memory interface; it was a 128-bit interface. You are correct in that it was a marketing tool. Where the 256 came from is quite arcane and I've seen several interpretations, e.g. 128-bit interface + 32-bit colour + 32-bit Z + 64 from somewhere!
 
Plus, let me add a question to this topic if you don't mind. It has been, what, 7 years(?) since the first 256-bit GPU by Nvidia hit the market. 7 years is a long time in the PC industry and they're still stuck on 256-bit GPUs. Why?

Nvidia introduced their first GPUs with a 256-bit-wide memory bus a little over 3 years ago, in mid-2003, with the NV35 / GeForce FX 5900. Prior to the NV35 / FX 5900, Nvidia had used a 128-bit memory bus on all their GPUs (except maybe very low-end parts) since the NV3 / Riva 128, introduced in 1997, if I am not mistaken.

Is it memory BW limitations?
If I recall, Matrox launched some Parhelia 512-bit GPU around 2 or 3 years ago, but its performance wasn't impressive at the time. If memory bandwidth was the problem then, wouldn't GDDR4 be up to a 512-bit bus now? If not, would at least a 384-bit one?
What's stopping them? Memory BW or just costs?

The Matrox Parhelia was the first consumer-grade GPU with a 256-bit memory bus (mid 2002). The Parhelia did not have a 512-bit bus, and to this day no known GPU has a 512-bit bus, at least not anything that has been seen or that is on the market. IIRC the next GPU to have a 256-bit bus was the 3DLabs P10, also in 2002. Then the ATI R300 / Radeon 9700 Pro was the first high-end DX9 GPU with a 256-bit bus that was good for gaming. Then, finally, Nvidia caught up with the consumer GPU industry, which had gone to 256-bit buses, in mid 2003 with the NV35 / FX 5900.

Nvidia had counted on GDDR2 with NV30 / FX 5800 to make up for having a 128-bit bus.


A true 512-bit memory bus would be very costly in a consumer GPU right now. I don't expect to see a 512-bit bus until ATI R800 and Nvidia G100 / NV60 in 2-3 years.
 
The 256 in the GeForce 256 didn't apply to the memory interface; it was a 128-bit interface. You are correct in that it was a marketing tool. Where the 256 came from is quite arcane and I've seen several interpretations, e.g. 128-bit interface + 32-bit colour + 32-bit Z + 64 from somewhere!

The G256 had an internal memory path of 256 bits (to match the external 128-bit DDR interface).
 
A true 512-bit memory bus would be very costly in a consumer GPU right now. I don't expect to see a 512-bit bus until ATI R800 and Nvidia G100 / NV60 in 2-3 years.
*whistles*

Anyway, didn't Jen-Hsun say in the last CC that we were going to see more G7x chips this year? Plus, there are the 7950 slides. So I think R580+/7950 (whatever that is) are the last hurrah of SM3.0, and after that it's DX10 all the way.
 
The 'bitness' of a GPU is essentially a marketing term, but you're confusing it with memory interface width. Matrox called the Parhelia a 512-bit GPU because it had four 128-bit vertex shader units.
It was called Parhelia-512 for multiple reasons, and one of them was its 512 bits of internal memory buses. This is nothing special now, as all chips with external 256-bit DDR buses have 512 bits internal. Also, MegaDrive is correct about the timeline: Matrox, 3DLabs, and ATI all came out with 256-bit interfaces around the same time, but it was in that order. Of course, being first doesn't matter, as ATI is the only one of the three still competing.
 
First GPU with a 128-bit memory bus: 3dfx Banshee / Nvidia NV4, introduced in 1998
First GPU with a 256-bit memory bus: Matrox Parhelia / 3DLabs P10 / ATI R300, introduced in 2002

(Nvidia first used a 256-bit memory bus with NV35, in 2003.)


My bet: a 512-bit memory bus will be used in graphics cards in 2-3 years. GDDR4 can go up to 3.2 GHz, but over that speed there is nothing yet. Memory cannot be sped up forever, so I think 512-bit will be a necessity in 3 years.
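
For a rough, illustrative sense of why (the bandwidth target below is arbitrary): the data rate a given bus needs to hit a bandwidth figure falls as the bus gets wider, so once memory tops out around 3.2 GT/s, only a wider bus keeps scaling.

Code:
# Illustrative only: effective data rate (GT/s) needed to hit an arbitrary
# bandwidth target on different bus widths (peak GB/s = bus_bits / 8 * rate).
def required_data_rate(target_gbs, bus_bits):
    return target_gbs * 8 / bus_bits

TARGET = 150.0  # GB/s -- an arbitrary future target, purely for illustration
for bus in (256, 384, 512):
    print(f"{bus}-bit bus needs ~{required_data_rate(TARGET, bus):.1f} GT/s "
          f"to reach {TARGET:.0f} GB/s")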
 
What if they go serial? Is it possible to use serial interfaces for GPUs? There's a load of latency-hiding mechanisms in GPUs for the super-long pipes already; shouldn't it be possible to use similar mechanisms to cover up the latencies of a serial interface?
 
The problem I see with going serial is that GPUs are already pushing the limits of memory clocks. Unless someone develops a memory that runs faster, going wider will be a popular option.
 
First GPU with a 128-bit memory bus: 3dfx Banshee / Nvidia NV4, introduced in 1998
First GPU with a 256-bit memory bus: Matrox Parhelia / 3DLabs P10 / ATI R300, introduced in 2002

(Nvidia first used a 256-bit memory bus with NV35, in 2003.)

I believe the Riva 128 had a 128-bit memory interface. I remember buying one of these when they first came out, and I believe the other accelerators at the time were 64-bit. But I am getting up there in years, so I could be wrong about that.
 
Pfft, everyone ignores the old Voodoos. The Voodoo 1 had 2x 64-bit buses and the Voodoo 2 had 3x 64-bit buses. At the time it was questioned whether the Voodoo 3, with a single 128-bit bus, would be able to maintain the same rendering efficiency as a Voodoo 2. The result, I honestly cannot remember.
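
Rough aggregate-bandwidth arithmetic with approximate clocks (and ignoring that the Voodoo 2's three buses fed separate framebuffer/texture pools, so the totals aren't directly comparable) suggests the single 128-bit bus wasn't at a raw-bandwidth disadvantage:

Code:
# Very rough comparison using approximate clocks; the Voodoo 2's three 64-bit
# buses served separate framebuffer/texture memories, so this is illustrative only.
def bandwidth_gbs(num_buses, bus_bits, clock_mhz):
    return num_buses * bus_bits / 8 * clock_mhz / 1000

print(f"Voodoo 2 (3 x 64-bit @ ~90 MHz):   {bandwidth_gbs(3, 64, 90):.2f} GB/s aggregate")
print(f"Voodoo 3 (1 x 128-bit @ ~166 MHz): {bandwidth_gbs(1, 128, 166):.2f} GB/s")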
 
Well, to answer Guden's question directly: no. At least not if you take "the next round" literally, as it appears that NV and ATI have another round of mid/low-end DX9 cards coming in the next few weeks.

Now, the next high-end part, meaning the next card to feature higher performance than either the GX2 or the X1950? There the answer is yes, that would be my expectation as of today, and it appears it will be G80. I've rolled the dice on early/mid November for that, though others have pointed at October --and we're all guessing. :)
 