Price of Graphics boards

Nick said:
How about this: move all pixel shader instructions involved in the calculation of 'oDepth' to the beginning of the shader. As soon as 'mov oDepth, ...' is executed you compare it to the z-buffer... no? That's how I might optimize it with my emulator. But in hardware things will probably go differently...
Yes, because how many cycles would you be waiting to get the result of your depth query?
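To make the reordering idea concrete, here is a minimal sketch of how an emulator-style software rasterizer might exploit it (my own illustration, not code from the thread; all names such as computeODepth and shadePixel are invented). The instructions feeding oDepth are hoisted to the start, the z-test runs immediately, and the expensive colour maths is skipped on rejection. As Dave H notes, real hardware would still have to cover the latency of the z-buffer read before it could act on the result.

```cpp
#include <cstdint>
#include <vector>

struct Pixel { int x, y; float interpolatedZ; };

struct ZBuffer {
    int width;
    std::vector<float> depth;
    float& at(int x, int y) { return depth[y * width + x]; }
};

// Stand-in for the hoisted "mov oDepth, ..." portion of the shader.
float computeODepth(const Pixel& p) { return p.interpolatedZ; }

// Stand-in for the remaining, expensive colour instructions.
uint32_t computeColour(const Pixel&) { return 0xFFFFFFFFu; }

// Returns true if the pixel survived the early depth test and was shaded.
bool shadePixel(const Pixel& p, ZBuffer& zb, uint32_t& outColour) {
    float oDepth = computeODepth(p);     // depth instructions moved to the front
    if (oDepth >= zb.at(p.x, p.y))       // early z-test (LESS passes)
        return false;                    // rejected: skip the colour maths
    zb.at(p.x, p.y) = oDepth;
    outColour = computeColour(p);        // only executed when the test passes
    return true;
}
```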
 
Nick said:
Dave H said:
CPUs are more complex than GPUs.
Thanks for the confirmation, but I still have to admit I underestimated GPU architecture. ;)

Oh, sure, both are extremely complex beasts. Any time you've got that much money and experience chasing a complex problem you're bound to get brilliantly engineered results. Of course there's more money and experience chasing the general-purpose computing problem... (Whether general-purpose computing is a more complex problem than the interactive visual simulation of reality (or whatever) is a question I'm not sure I can answer.)

Personally I'm a bit more fascinated by GPU design, at this stage in my life. Possibly because I know less about it and haven't entirely wrapped my head around the low level hardware approaches to a complex but embarrassingly parallel problem where the main challenge is hiding latency. (In general-purpose computing, on the other hand, the main challenge is extracting parallelism from a serial instruction stream.)
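As a rough illustration of that latency-hiding point (my own toy model, not anything from the thread; every structure and constant here is made up), the sketch below keeps many pixel contexts in flight and simply issues work for whichever one isn't waiting on a texture fetch, so the execution unit rarely sits idle even though each individual pixel stalls for a long time:

```cpp
#include <cstdio>
#include <vector>

struct PixelContext {
    int  pc = 0;          // next instruction for this pixel
    long readyAt = 0;     // cycle when its pending fetch completes
    bool done = false;
};

int main() {
    const int  kInstructions = 8;     // short shader per pixel
    const long kFetchLatency = 100;   // cycles for a texture fetch
    std::vector<PixelContext> pixels(256);   // many pixels in flight

    long cycle = 0;
    size_t finished = 0;
    while (finished < pixels.size()) {
        // Round-robin: issue one instruction for any pixel whose
        // outstanding fetch has completed.
        for (auto& px : pixels) {
            if (px.done || px.readyAt > cycle) continue;
            if (++px.pc >= kInstructions) { px.done = true; ++finished; }
            else if (px.pc % 4 == 0)      px.readyAt = cycle + kFetchLatency;
            break;   // one instruction issued this cycle
        }
        ++cycle;     // with enough pixels in flight, idle cycles are rare
    }
    std::printf("finished %zu pixels in %ld cycles\n", finished, cycle);
    return 0;
}
```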
 
Dave H said:
the main challenge is hiding latency
I'd add the word 'cheaply' - it's easy to hide latency if you can afford massive buffers! - and there's a lot of truth to that, but it's not quite the whole story.

The phrase I like to hear is 'a well-balanced architecture'. This implies that every part gets enough effort to do its job properly, without absorbing a disproportionate cost (time, money, etc.) compared to the rest. (From a performance point of view, as you say, this can be hugely generalised to keeping your pipeline flowing and getting the latency compensation right.)

Now that's a real challenge. It's fascinating.
 
The only reason the prices are so inflated is that people keep buying cards at these prices.

This must be one of the only products with healthy competition where the introduction of a new card causes absolutely no drop in prices.

IF THEY CAN SELL AN XBOX WITH A GF3 CHIP IN IT FOR £80, WHERE'S THE LOGIC?
 
The Xbox is sold below cost, in the hope of making that cost up through game purchases. Every console ever made has operated under this premise (or at least those in recent history).
 
kkevin666 said:
IF THEY CAN SELL AN XBOX WITH A GF3 CHIP IN IT FOR £80, WHERE'S THE LOGIC?

Ummm... Because we don't make any money by selling games?

[edit - Should have read Russ's reply before posting...]
 
RussSchultz said:
The Xbox is sold below cost, in the hope of making that cost up through game purchases. Every console ever made has operated under this premise (or at least those in recent history).
As far as I know, Nintendo doesn't sell the GameCube at a loss.
 
Pete said:
My amateur perspective: I still view the 3D market ATi and nVidia compete in as primarily a luxury market, whereas Intel's and AMD's business is really more vital. Also, ATi and nVidia are at the mercy of third parties to provide basically their whole product, from materials to manufacture, whereas Intel and AMD own their own fabs.

I think that hits the nail on the head. It also explains why AMD and Intel can "freely" publish so much detail about the inner workings of their products, while graphics vendors guard theirs like the "family jewels."
 