AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

  Total voters: 155
  Poll closed.
Hallelujah!

[attached image: afresult.jpg]

Can anyone explain to me what this is? And why is this such a big deal?

All I know about AF is that it's used when you view textures at a very oblique angle.
 
Can anyone explain to me what this is? And why is this such a big deal?

All I know about AF is that it's used when you view textures at a very oblique angle.
Have you ever suffered from plainly obvious angle-dependent and undersampled AF quality? Don't tell me you missed the whole R300~G71 timeline. :LOL:
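Very roughly, for anyone wondering what AF actually computes: the sketch below (simplified, not any vendor's real hardware math, and the derivative inputs are hypothetical names) estimates how stretched a pixel's footprint is in texel space, plus a cheap axis-aligned shortcut of the kind that made older implementations angle-dependent.

```python
import math

def anisotropy(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Estimate a pixel's footprint in texel space from texture-coordinate
    derivatives along screen x and y (hypothetical inputs; real hardware
    derives them per 2x2 pixel quad)."""
    len_x = math.hypot(dudx, dvdx)  # footprint extent along screen x
    len_y = math.hypot(dudy, dvdy)  # footprint extent along screen y
    major, minor = max(len_x, len_y), min(len_x, len_y)
    # AF takes up to max_aniso extra taps along the major axis ...
    ratio = min(major / max(minor, 1e-8), max_aniso)
    # ... and picks the mip level from the minor axis, so the texture
    # stays sharp instead of being blurred down to fit the major axis.
    lod = math.log2(max(minor, 1e-8))
    return ratio, lod

def anisotropy_cheap(dudx, dvdx, dudy, dvdy):
    """A cheap axis-aligned approximation (purely illustrative, not any
    vendor's actual formula): fine when the surface tilts along a texture
    axis, wrong at in-between rotations -- hence angle-dependent AF and
    the 'flower' patterns in test images like the one above."""
    len_x = max(abs(dudx), abs(dvdx))
    len_y = max(abs(dudy), abs(dvdy))
    return max(len_x, len_y) / max(min(len_x, len_y), 1e-8)
```

The tunnel-style test image makes the difference visible: with an angle-independent estimate the mip transitions form circles; with a cheap approximation they pucker at certain rotations.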
 
Of course, the real-world implications will be variable. Since R520 (and G80, for that matter) the IQ differences regarding AF have been diminishing.

These days they're determined more by how much the drivers are 'optimizing' the texture filtering than by what the hardware is actually capable of :)
 
Have you ever suffered from plainly obvious angle-dependent and undersampled AF quality?

I don't play games. ;)
Don't tell me you missed the whole R300~G71 timeline. :LOL:
I missed everything prior to G80. :p

I just wanna know what the deal is with AF, this pic/holy grail, and angle independence.
Any resources/tips/pointers welcome.
 
Well, to be honest, NV has had a (near) perfect AF implementation since G80's introduction, but R800 just hammered in the last nail. Both implementations are good enough for the time being, though; nothing revolutionary, but it's good to finally close this chapter on AF quality after so many years. ;)

p.s.: NV will probably follow suit with GT300, if only not to fall behind on this "check-box" feature, for PR's sake.
 
Meh, the new AF, while technically a nice achievement, won't mean anything for me in games. I couldn't tell the difference between R600 and G80, and I certainly won't be able to tell the difference between G80 and RV870. :p

Regards,
SB
 
Well, to be honest, NV has had a (near) perfect AF implementation since G80's introduction, but R800 just hammered in the last nail.
There is no information about quality with respect to flickering, which has been the problem with filtering on Radeons since R600, even with AI turned off.
 
The angle dependency is much less of an issue (and has been since the X1900XT/G80) than the mipmap and LoD filter "optimizations". Those need to just go away. I wouldn't shed a tear to see the "Trilinear Optimisation" mode disappear and HQ forced all the time. The performance you gain from using it isn't even worth having it in the control panel these days.
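For reference, a toy sketch of what that "trilinear optimization" does to the mip blend weight (the curve and the 0.3 cutoff are made-up illustrations, not any actual driver's values):

```python
def trilinear_weight(lod_frac):
    """Full trilinear: always blend the two nearest mip levels."""
    return lod_frac

def brilinear_weight(lod_frac, opt=0.3):
    """'Brilinear': use a single mip level across most of the range and
    only blend in a narrow window around the transition. Cheaper (fewer
    texels fetched), but it can show mip bands and shimmer in motion."""
    lo, hi = opt, 1.0 - opt
    if lod_frac <= lo:
        return 0.0                      # pure bilinear from mip N
    if lod_frac >= hi:
        return 1.0                      # pure bilinear from mip N+1
    return (lod_frac - lo) / (hi - lo)  # compressed blend zone
```

The wider the single-mip zone, the bigger the saving and the more visible the transition bands, which is exactly the trade-off the control panel slider is hiding.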
 
Please do, I want to hear Jawed drop from his comfortable horsehair-filled, silver-lined desk chair!
I "upgraded" from the bamboo cane chair I was using to a wooden, uncushioned, dining chair. The only seating related hair round here is hair shirt. Nice headphones, a nice 22" CRT monitor, my clacky IBM keyboard and classic Explorer 3 mouse are ample compensation though :D

Jawed
 
I admit that I'm definitely not a financial expert, but please tell me: why did Jen-Hsun explain part of their loss by a huge stock of unsold GT200/65nm GPUs, if their costs aren't accounted for?

I don't know, to be honest. Could be overheads, which are normally covered by the contribution margin you realise when you sell your stock.
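A toy illustration of how that can play out, with made-up numbers (nothing here is NVIDIA's actual accounting):

```python
# All figures hypothetical, in arbitrary currency units.
revenue        = 100.0  # sales booked in the quarter
variable_costs =  60.0  # cost of the units actually sold
fixed_overhead =  30.0  # overheads the margin has to cover

contribution_margin = revenue - variable_costs              # 40.0
operating_result    = contribution_margin - fixed_overhead  # +10.0

# A write-down of unsold inventory (e.g. 65nm GPUs now worth less
# than they cost to build) hits the same quarter on top of that:
inventory_write_down = 15.0
operating_result -= inventory_write_down                    # -5.0: a loss
```

So even though the unsold chips were paid for earlier, revaluing them downward still shows up as a loss in the period when the write-down is taken.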
 
There are games (very few for now) that will show a hit even at 1280x1024.
For example, Far Cry 2:
The 1GB version is 10-15% faster than the 512MB version at 1280x1024,
15-20% faster than the 512MB version at 1680x1050,
and 20-25% faster than the 512MB version at 1920x1200.

The above are average fps.

The real problem is the minimum fps: the difference there is way higher than in the average fps.
(Low minimum fps make the games feel much slower than the average fps suggests; the toy trace below shows why...)
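A toy frame-time trace (all numbers made up for illustration):

```python
# Hypothetical frame times in ms: mostly smooth, with spikes of the
# kind you get when a 512MB card has to page textures in.
frame_ms = [16, 16, 17, 16, 80, 16, 17, 90, 16, 16]

avg_fps = 1000 * len(frame_ms) / sum(frame_ms)  # ~33 fps on paper
min_fps = 1000 / max(frame_ms)                  # ~11 fps at the spikes

# The average hides the spikes; the stutter you feel is the minimum.
```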
Clearly I'm out of touch. A bit embarrassing, really. I was actually planning on buying the 2GB HD5870 (even though I hate the cooler, damn it's yucky, and despite the fact the performance is worryingly low), but that's because I always buy "excess" memory.

The thing that gets me about the performance of HD5870 is that it appears AMD is basically saying "that's it, we're bandwidth limited and GDDR5 won't go much, if any, faster".

This is a dangerous point because I strongly believe Larrabee is considerably more bandwidth efficient. So, either R900 is a total rethink in the Larrabee direction or AMD's fucked in 18 months. I don't care how big Larrabee is (whatever version it'll be on), I want twice HD5870 performance by the end of 2010. The dumb forward-rendering GPUs are on their last gasp if memory bandwidth is going nowhere.
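Back-of-envelope, assuming the commonly quoted HD 5870 memory configuration (256-bit bus, GDDR5 at 4.8 Gbps effective per pin):

```python
bus_width_bits = 256
gbps_per_pin   = 4.8                                # effective GDDR5 data rate
bandwidth_gbs  = bus_width_bits / 8 * gbps_per_pin  # = 153.6 GB/s

# Doubling the shader core without more GB/s only helps workloads
# that weren't bandwidth-bound in the first place.
```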

Of course if AMD can make a non-AFR multi-chip card that doesn't use 2x the memory for 1x the effective memory, then I'm willing to be a bit more patient and optimistic.
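The AFR memory problem in one line: each GPU renders alternate frames, so each needs its own full copy of every texture and buffer.

```python
per_gpu_gb = 1.0              # memory fitted per GPU on a dual-GPU card
fitted_gb  = 2 * per_gpu_gb   # 2.0 GB on the box
usable_gb  = per_gpu_gb       # 1.0 GB effective working set
```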

Jawed
 