Are we any closer?

Ty

Roberta E. Lee
So shortly after the Voodoo 1 debuted, we were told that traditional renderers were on their last legs (memory bandwidth, blah blah blah).

As I've fallen behind on the latest and greatest, I was wondering if traditionals have come closer to this wall or if in fact, with new techniques and technology, it's even less of a concern. I mean, does it matter more or less nowadays?
 
1) Are we closer to what?

2) A square/rectangular room normally has 6 "walls" or boundaries... which wall do you mean? 8)

3) What do you mean by "traditional renderers"?

4) Damn, I must be thick... does what matter more or less "nowadays" ?
 
Technically, four walls, a ceiling, and a floor. :p

Do you mean IMR vs. TBDR? Because current "IMRs" have adopted some TBDR techniques, IIRC.
 
If, as I think, you are asking "Are the perceived advantages of tile-based rendering architectures vs current NV/ATI architectures relatively more or less important today than they were at the time of Voodoo 1", then so far as I can tell, relatively less advantageous today than they were then. In part because of new techniques incorporated into the current architectures since then, and in part because of the huge and growing investment in the existing architectures increasingly raising the dollar/resource cost to start from scratch with a TBR that could be competitive at the high-end.

So far as I can tell, the last *really serious* bandwidth "crisis" with the dominant architectures was the original Geforce SDR. Which is not to say we don't pine for more bandwidth at times --but I don't recall anytime lately anyone suggesting we could have cards that were, say, 50% faster if only we had more bandwidth.

But all I know I got here. :)
 
Ah, so perhaps he was talking about TBR. In that case, and in every other case, fillrate limitations are always a concern --especially so with programmability kinda overriding bandwidth. And I assume we're not talking about AA and/or AF.
 
It has long been the view of NVIDIA that, due to stream processing, bandwidth will not be a major concern; i.e. shader programs will reach such complexity that the GPU is compute bound rather than I/O bound.
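That compute-bound vs. I/O-bound distinction can be sketched with a back-of-the-envelope comparison of time spent on ALU work vs. time spent moving bytes per pixel. All the figures below are made-up illustrative numbers, not any real GPU's specs:

```python
# Rough sketch: a shader pass is bandwidth-bound when moving its bytes takes
# longer than executing its ALU ops; longer shader programs shift the balance
# toward being compute-bound. All figures are illustrative assumptions.

def bound_by(alu_ops_per_pixel, bytes_per_pixel, gflops, bandwidth_gb_s):
    """Return which resource runs out first for a given shader workload."""
    compute_time = alu_ops_per_pixel / (gflops * 1e9)       # seconds per pixel
    memory_time = bytes_per_pixel / (bandwidth_gb_s * 1e9)  # seconds per pixel
    return "compute" if compute_time > memory_time else "bandwidth"

# Hypothetical GPU: 100 GFLOPS of shader ALU, 30 GB/s of memory bandwidth.
# A short fixed-function-style pass (few ops, lots of texture/framebuffer bytes):
print(bound_by(alu_ops_per_pixel=10, bytes_per_pixel=16,
               gflops=100, bandwidth_gb_s=30))   # bandwidth

# A long shader program (hundreds of ops, same memory traffic):
print(bound_by(alu_ops_per_pixel=500, bytes_per_pixel=16,
               gflops=100, bandwidth_gb_s=30))   # compute
```

The crossover point is exactly NVIDIA's argument: once per-pixel ALU work grows faster than per-pixel memory traffic, adding shader complexity makes bandwidth relatively less important.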
 
I think a lot of the opinions expressed on this topic on b3d back in '99-'00 would look quite funny today, at least that's what my memory tells me. (Un)fortunately, I think most of them were on the older incarnation of the forums :)

You don't see that much talk about the "bandwidth wall" today, so in answer to
I mean, does it matter more or less nowadays?
I would say less.
 
Well, any fillrate or bandwidth savings would still give you a huge lead on other companies.
 
jvd said:
Well, any fillrate or bandwidth savings would still give you a huge lead on other companies.

Assuming that the gates you use to achieve it, don't end up reducing the gates you can use elsewhere to the point that you can't compete.

There is no such thing as a free ride on this.

The death of the conventional renderer is a lot like the death of X86 long predicted and probably a long ways off.
 
Erp, I don't think traditional renderers are going anywhere. But I believe a deferred tile renderer can live alongside them.

From the last example of a PowerVR card we have, it used slower RAM and fewer transistors along with a lower memory bus, and competed well with the GeForce 2 lineup of cards. The only thing it was really missing was hardware T&L.

Edit: I was wrong on the memory bus size, but here are figures.

STG4800: 180nm process, 15M transistors, a 2x2 configuration at 200MHz, using SDR RAM.

That's 10 million transistors fewer than the GeForce 2 GTS, and it uses SDR instead of DDR, yet in 80-90% of games it can keep up with or surpass the GeForce 2 GTS, and in some it even beats the GeForce 2 Ultra.
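The arithmetic behind that comparison is easy to sketch. Taking the 2x2-at-200MHz figure above for the STG4800, and the commonly quoted 4x2-at-200MHz configuration for the GeForce 2 GTS (an assumption on my part, as is the overdraw factor), a deferred renderer's zero-overdraw advantage closes most of the raw fillrate gap:

```python
# Theoretical texel fillrate: pipelines x TMUs-per-pipe x core clock (MHz),
# giving Mtexels/s.
def fillrate_mtexels(pipes, tmus_per_pipe, clock_mhz):
    return pipes * tmus_per_pipe * clock_mhz

kyro = fillrate_mtexels(2, 2, 200)   # 800 Mtexels/s raw (STG4800)
gts  = fillrate_mtexels(4, 2, 200)   # 1600 Mtexels/s raw (assumed 4x2 config)

# A tile-based deferred renderer shades only visible pixels, so its raw
# fillrate is roughly its effective fillrate. An immediate-mode renderer
# spends fillrate on overdrawn pixels; with a scene overdraw factor of ~3
# (an illustrative figure), its effective rate drops accordingly.
overdraw = 3.0
gts_effective = gts / overdraw        # ~533 Mtexels/s of visible pixels

print(kyro, gts, round(gts_effective))  # 800 1600 533
```

Under those assumptions the half-size chip with half the raw fillrate comes out ahead on visible pixels, which is consistent with it keeping pace in fillrate-limited games.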
 
Yeah, but ATI/NV don't have an overpowering incentive to go that route.

A new entrant would have a pretty big wall to get over on *other* things like shader hardware & driver quality. It seems to me that the investment required for a new entrant to be performance competitive with ATI/NV gets more prohibitive all the time --the Sony/NV deal is in part a testament to that fact.
 
Ty said:
So shortly after the Voodoo 1 debuted, we were told that traditional renderers were on their last legs (memory bandwidth, blah blah blah).

As I've fallen behind on the latest and greatest, I was wondering if traditionals have come closer to this wall or if in fact, with new techniques and technology, it's even less of a concern. I mean, does it matter more or less nowadays?

Have you looked at the price of some of the high end graphics cards lately?
 
geo said:
So far as I can tell, the last *really serious* bandwidth "crisis" with the dominant architectures was the original Geforce SDR. Which is not to say we don't pine for more bandwidth at times --but I don't recall anytime lately anyone suggesting we could have cards that were, say, 50% faster if only we had more bandwidth.

Yeah but there aren't any emerging memory technology leaps that any vendors plan to include in a souped up version of their card right now.

They know what their memory limit is, so why bother spending loads of cash making the core far and away too fast for its RAM interface? You can just design it within means now and get it done faster, and by the time memory is available to go 20% faster you optimise the core to get that extra speed too. 20% is a large amount to go up at the moment; GDDR3 was nice, but it's still not the same as the introduction of DDR on a 128-bit wide bus.
 
Dave B(TotalVR) said:
geo said:
So far as I can tell, the last *really serious* bandwidth "crisis" with the dominant architectures was the original Geforce SDR. Which is not to say we don't pine for more bandwidth at times --but I don't recall anytime lately anyone suggesting we could have cards that were, say, 50% faster if only we had more bandwidth.

Yeah but there aren't any emerging memory technology leaps that any vendors plan to include in a souped up version of their card right now.

They know what their memory limit is, so why bother spending loads of cash making the core far and away too fast for its RAM interface? You can just design it within means now and get it done faster, and by the time memory is available to go 20% faster you optimise the core to get that extra speed too. 20% is a large amount to go up at the moment; GDDR3 was nice, but it's still not the same as the introduction of DDR on a 128-bit wide bus.

Last time around NV *quadrupled* their pipes, and ATI *doubled* theirs, without any particularly strenuous monkeying around with memory interfaces. That is not evidence of anything approaching a crisis. No way 20% would be enough incentive for them to ditch their investments in their current architectures for a TBR --it would take something like the "bandwidth wall" fear (which didn't materialize) above to do that. I used 50% in my post as an example, but I'm not sure even that would be enough incentive. And that assumes other solutions are just impractical (like, say, doubling bus width again to 512-bit).

Add record die sizes and exotically rare yields at the current clock speeds, and then tell me again how they could easily soup up the rest of the architecture if only they had more bandwidth to fill.

Possibly the FX was seriously under-bandwidthed, but that was a miscalculation rather than a technological limit. I recall an interview with Kirk pre-release where he said there was no need to move off the 128-bit bus yet. Even Matrox had had a 256-bit bus for many months before the FX hit the streets, and NV quickly put one on the refresh.
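For reference, the arithmetic behind the 128-bit vs. 256-bit comparison is just (bus width in bytes) x (memory clock) x (transfers per clock). A quick sketch, where the 166MHz clock is only an illustrative value rather than any specific card's spec:

```python
# Peak memory bandwidth in GB/s:
#   (bus_bits / 8) bytes per transfer x clock (MHz) x transfers per clock,
#   divided by 1000 to convert MB/s to GB/s.
def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock):
    return (bus_bits / 8) * clock_mhz * transfers_per_clock / 1000

sdr_128 = bandwidth_gb_s(128, 166, 1)  # ~2.7 GB/s: 128-bit SDR
ddr_128 = bandwidth_gb_s(128, 166, 2)  # ~5.3 GB/s: DDR doubles transfers
ddr_256 = bandwidth_gb_s(256, 166, 2)  # ~10.6 GB/s: doubling bus width again

print(round(sdr_128, 1), round(ddr_128, 1), round(ddr_256, 1))
```

On paper each doubling of bus width doubles peak bandwidth; the diminishing returns mentioned elsewhere in the thread come from board cost, pin count, and how efficiently a renderer can actually use the wider bursts.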
 
Well bear in mind that increasing your bus width is a game of diminishing returns (well on a traditional anyway) ;)


But the bandwidth 'wall' isn't a wall as such, its a price hike.

£400+ for a top-of-the-range card says we are running into problems --not just with bandwidth, though; die size too, like you say. PowerVR also benefits in this area, being pretty much 100% efficient. I couldn't tell you for sure how efficient a 6800, for example, is, but I'm guessing about 40%.

Not only that, power requirements too. Dual Molex power connectors on your AGP card says that's a problem too. I couldn't use an Ultra in my system; in fact I had to get a Molex splitter to be able to use my plain 6800.
 
Dave B(TotalVR) said:
Well bear in mind that increasing your bus width is a game of diminishing returns (well on a traditional anyway) ;)


But the bandwidth 'wall' isn't a wall as such, its a price hike.

£400+ for a top-of-the-range card says we are running into problems --not just with bandwidth, though; die size too, like you say. PowerVR also benefits in this area, being pretty much 100% efficient. I couldn't tell you for sure how efficient a 6800, for example, is, but I'm guessing about 40%.

Not only that, power requirements too. Dual Molex power connectors on your AGP card says that's a problem too. I couldn't use an Ultra in my system; in fact I had to get a Molex splitter to be able to use my plain 6800.

Cool. Where can I buy a PVR card to rival GF6/X8? Me wants.
 
nutball said:
Dave B(TotalVR) said:
Well bear in mind that increasing your bus width is a game of diminishing returns (well on a traditional anyway) ;)


But the bandwidth 'wall' isn't a wall as such, its a price hike.

£400+ for a top-of-the-range card says we are running into problems --not just with bandwidth, though; die size too, like you say. PowerVR also benefits in this area, being pretty much 100% efficient. I couldn't tell you for sure how efficient a 6800, for example, is, but I'm guessing about 40%.

Not only that, power requirements too. Dual Molex power connectors on your AGP card says that's a problem too. I couldn't use an Ultra in my system; in fact I had to get a Molex splitter to be able to use my plain 6800.

Cool. Where can I buy a PVR card to rival GF6/X8? Me wants.
You're asking the wrong question. The correct question is "When will we be able to buy a PVR card that rivals today's high-end DX9 cards?"
 
Reverend said:
The correct question is "When will we be able to buy another PVR card?".

Well, apparently no major company has enough vision and greed to offer a performance-competitive card at a higher profit margin than NV/ATI. Curious, that.
 
Or... who will make the chips in volume? What kind of RAM would they use, and when would they get it? The chip fabs are full, and the RAM is maxed out too.
 