Inquirer spreading R420 info

BRiT said:
F-Buffer? Tweaks to 4x-AA? Can't recall if that was intro'd in the R350 or just the r360s.
ATI has yet to implement F-Buffer support. If ATI finally does implement the F-buffer, we don't currently know whether or not the 9700 will support it. I don't remember any changes to the FSAA algorithm.
 
The tweaks to 4x-AA are minor speed tweaks. FWIR, it takes less of a hit on the R350 than it does on the R300.
 
The Baron said:
ANova said:
I understand where you're coming from, but consider this: how many games are available for Linux? And how big is the Linux userbase in comparison to Windows? Obviously, because of these reasons, ATI has put Linux drivers on the backburner.

If you're a Linux buff then you shouldn't go ATI.
Doesn't mean you shouldn't be annoyed that ATI doesn't have Linux support that's anywhere near the level of NVIDIA. If NVIDIA can do it, why can't ATI?

I guess it all depends on where you want to put your resources. Because I don't use Linux, I would be pissed if ATI spent all their time developing drivers for it and let the Windows drivers lag behind in features. I guess the bottom line is, if you use Linux (all three of you :rolleyes: ), you should buy an Nvidia card and quit complaining about ATI's lack of support.

If things change in the future and everyone starts using Linux I would bet ATI would increase their support.
 
Way back, I believe Anand had some comparisons that showed how the R350 was more efficient clock-for-clock than the R300, especially with AA/AF enabled... others are saying most of Smoothvision 2.1 is actually done in the driver and that the same boost is also available on the R300. F-buffer is pretty much dead though (at least in the consumer market, dunno about professional applications using it).
 
Chalnoth said:
MrBond said:
Randell said:
GF3 - GF3 Ti500/GF3 Ti200 - which is exactly the same as 9700 Pro - 9800 Pro in execution. An extra refresh in the XT was added because of the lengthening product cycle.
Except I would argue that the Ti500 didn't differ much at all from the original GF3, aside from higher clocks to settle the 8500/GF3 debate for the next couple of months.
I don't seem to remember the 9800 Pro offering anything new, either...

Exactly, and that's why as a consumer I felt let down by both IHVs. It may help in the long run, as they can obviously recoup R&D costs, and if I could trust them for a second I'd think all those earnings would get ploughed straight back into the next card. Unfortunately I suspect both will send shedloads of cash to the shareholders; great for them but less so for the consumer. This, combined with the risk of the next-gen parts showing little IQ improvement in areas such as FSAA and AF, means I'm not going to be thrilled. I've already condemned NV40 in my mind's eye and as such it won't be getting my custom. As for the R420, well, that remains to be seen.
It started to worry me when both IHVs apparently had talks and agreed to slow the refresh cycles. Hmmm, very convenient. ATI then shoved the 8500 into the bottom segment as opposed to a true DX9 card, and the R3x0 series has been tweaked over 18 months. Timelines like these are really going to hurt, IMHO, unless you can deliver a massive increase across all areas of the 3D card arena: IQ, features and speed. As we've seen, NV40 only delivers on two of the three, and I suspect ATI will only deliver on one of the three. It's then a pain to see just how quickly Nvidia can react to bring out a proper refresh part when the chips are down. Just look at the 5700, much better than the 5600. Is it a coincidence that Nvidia needed a competing midrange card? But hey, what happened to the idea of an 18-month cycle? If it suits them and the blasted shareholders, a new card comes screaming down the line :(
I know real-world economics can't/don't work like this, and I'm being extremely one-sided and unrealistic to expect a flashy new core every 6-12 months, but currently I do think the IHVs (ATI especially) are more concerned with their shareholders than their consumers and have decided for themselves the pace we all should have to move at.
 
anaqer said:
F-buffer is pretty much dead though (at least in the consumer market, dunno about professional applications using it).
But if it's not supported by ATI's drivers, how can anybody make use of it?
 
Doomtrooper said:
There is so much wrong with that above post I won't even comment, man.

lol, go on DT you know you want to. As I said mind, I'm only having a little rant so don't feel the need to blow a fuse.

:)
 
I really think people have no clue how technology works; the reason ATI and Nvidia are slowing down is the same reason Intel and AMD have slowed down. All these companies have reached the technology barriers, and unless someone has some technology stolen from Area 51, a natural slowdown is to be expected.
 
Chalnoth said:
BRiT said:
F-Buffer? Tweaks to 4x-AA? Can't recall if that was intro'd in the R350 or just the r360s.
ATI has yet to implement F-Buffer support. If ATI finally does implement the F-buffer, we don't currently know whether or not the 9700 will support it. I don't remember any changes to the FSAA algorithm.

I always thought the F-buffer was a way to run longer shaders...
which would only make it useful in OpenGL, I guess...
and was implemented on the 9800 series...

Always sounded more like a "professional 3D apps feature" to me...
and maybe to some extent a counter to the rather higher instruction limit
the FX had... (PR reasons; you wouldn't need it in games, I assume.)
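For anyone wondering what "a way to run longer shaders" means in practice: the published F-buffer idea is to break a shader that exceeds the hardware instruction limit into multiple passes, spilling each fragment's intermediate values to a FIFO-style buffer between passes (in rasterization order, so overlapping fragments don't clobber each other). A rough Python sketch of just the splitting concept, with hypothetical names and a made-up fragment model; this is only meant to illustrate the chunking, not how the hardware actually does it:

```python
# Illustrative sketch only (hypothetical names, not ATI's implementation).

def split_into_passes(instructions, limit):
    """Chunk an instruction list into passes that fit the hardware limit."""
    return [instructions[i:i + limit]
            for i in range(0, len(instructions), limit)]

def execute(instrs, state):
    # Stand-in for real per-fragment ALU work.
    return state + len(instrs)

def run_multipass(instructions, limit, fragments):
    passes = split_into_passes(instructions, limit)
    fbuffer = fragments  # pass 0 consumes the rasterized fragments
    for p in passes:
        # Each later pass reads every fragment's intermediate state back
        # from the F-buffer and writes the updated state out again.
        fbuffer = [execute(p, frag) for frag in fbuffer]
    return fbuffer
```

For example, a 150-instruction shader on hardware with a 64-instruction limit would be split into three passes, and every fragment still sees all 150 instructions by the end.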
 
Doomtrooper said:
I really think people have no clue how technology works; the reason ATI and Nvidia are slowing down is the same reason Intel and AMD have slowed down. All these companies have reached the technology barriers, and unless someone has some technology stolen from Area 51, a natural slowdown is to be expected.

Doom, with all due respect, don't be a pompous twit! Do you think ATI pushed the barrier so much that they couldn't deliver a genuine DX9 low-end part, or that they couldn't have done more with the R360? Did they really need the secrets of Area 51 to help them?
The bottom line, as with any company, is profit and margins. Although memory may soon hold back the big two in the MHz race, both have plenty of room and a very healthy margin.
 
Seiko, what good would DX9 do you in a 9200-level card, besides looking good on paper?

There's no need for name-calling. You're not respecting anyone by calling them a "pompous twit." And you're not respecting reality when someone can buy a "low-end" DX9 card in a $100 128MB 9600. I'm sorry, but if you're expecting decent speed in new games out of a $50 card, you're mistaken. Besides, anyone who can spend $50 on a video card can save up a few more months and pick up a $100 9600, or can cut out the middleman and buy a used 4200 for less.
 
Seiko said:
Chalnoth said:
MrBond said:
Randell said:
GF3 - GF3 Ti500/GF3 Ti200 - which is exactly the same as 9700 Pro - 9800 Pro in execution. An extra refresh in the XT was added because of the lengthening product cycle.
Except I would argue that the Ti500 didn't differ much at all from the original GF3, aside from higher clocks to settle the 8500/GF3 debate for the next couple of months.
I don't seem to remember the 9800 Pro offering anything new, either...
Exactly, and that's why as a consumer I felt let down by both IHVs. It may help in the long run, as they can obviously recoup R&D costs, and if I could trust them for a second I'd think all those earnings would get ploughed straight back into the next card. Unfortunately I suspect both will send shedloads of cash to the shareholders; great for them but less so for the consumer.
No offense here but your viewpoint is extremely childish. Do cars get twice as fast every year? Of course not. Do people feel cheated because cars are not getting twice as fast every year? Of course not.
It started to worry me when both IHVs apparently had talks and agreed to slow the refresh cycles. Hmmm, very convenient.
What talks were these? You mean some rumored ones? Give me a break.
ATI then shoved the 8500 into the bottom segment as opposed to a true DX9 card, and the R3x0 series has been tweaked over 18 months.
Buy what card works for you. Does it matter if it's DX8 or DX9? Not really. If the card meets your needs, who cares?
Timelines like these are really going to hurt, IMHO, unless you can deliver a massive increase across all areas of the 3D card arena: IQ, features and speed. As we've seen, NV40 only delivers on two of the three, and I suspect ATI will only deliver on one of the three. It's then a pain to see just how quickly Nvidia can react to bring out a proper refresh part when the chips are down. Just look at the 5700, much better than the 5600. Is it a coincidence that Nvidia needed a competing midrange card? But hey, what happened to the idea of an 18-month cycle? If it suits them and the blasted shareholders, a new card comes screaming down the line :(
Again, your view is very childish. First, the 5700 is the same as the 5600, just on a new process. Changing processes is relatively easy (i.e. it doesn't require 18 months or more to do). A complete design of a new chip takes far longer (that's the 18+ months people are referring to).
I know real-world economics can't/don't work like this, and I'm being extremely one-sided and unrealistic to expect a flashy new core every 6-12 months, but currently I do think the IHVs (ATI especially) are more concerned with their shareholders than their consumers and have decided for themselves the pace we all should have to move at.
Since you obviously have no clue about what it takes to design and build a chip, why don't you just assume that ATI and NVIDIA are doing the best they can?

-FUDie
 
FUDie said:
Again, your view is very childish. First, the 5700 is the same as the 5600, just on a new process. Changing processes is relatively easy (i.e. it doesn't require 18 months or more to do). A complete design of a new chip takes far longer (that's the 18+ months people are referring to).

This is not true. The 5700 is not the same as the 5600. The changes to its core are similar in nature to the changes made to the core from the 5800 --> 5900.
 
bdmosky said:
FUDie said:
Again, your view is very childish. First, the 5700 is the same as the 5600, just on a new process. Changing processes is relatively easy (i.e. it doesn't require 18 months or more to do). A complete design of a new chip takes far longer (that's the 18+ months people are referring to).
This is not true. The 5700 is not the same as the 5600. The changes to its core are similar in nature to the changes made to the core from the 5800 --> 5900.
If true, then it's likely that a lot of the same work was copied from the 5900 and that a whole new design wasn't created. Also, we know that a lot of work on the 5900 was done even before the 5800 was released, since the 5900 came out so quickly after the 5800. Likely, the 5700 was nearly completed as well. Anyway, my point was that companies don't just "rush a whole new design in 6 months because it will help the shareholder," because it's physically impossible to do so.

-FUDie
 
Well, I thought ATI & NV are accelerating, not slowing down.

I mean, look at the gap from the GF4 to the 9700 Pro to the 6800 Ultra (X800 XT if it is 600 MHz).
That ain't a slowdown.
 
Chalnoth said:
anaqer said:
F-buffer is pretty much dead though ( at least in the consumer market, dunno about professinal applications using it ).
But if it's not supported by ATI's drivers, how can anybody make use of it?
Maybe ATI will expose F-buffer to make R350 and R360 PS2.0_b compatible?
 
UPO said:
Maybe ATI will expose F-buffer to make R350 and R360 PS2.0_b compatible?
PS 2.0b exposes 512 maximum instructions (the maximum for PS 2.x), 32 registers, and unlimited texture instructions.

Now, I could see the f-buffer allowing unlimited texture instructions and 512 instructions, but I really don't see it offering 32 registers.

But more importantly, PS 2.0b does not support unlimited dependency on texture reads. It's still 4 levels of dependency. That is a red flag to me that this is still one pass only, not auto-multipass via the F-buffer.
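To make the comparison concrete, the limits quoted above can be written as a quick sanity check. This is a hypothetical sketch using only the numbers from this post (512 instruction slots, 32 temp registers, 4 levels of dependent texture reads); the real profile is defined by the ps_2_b compile target, not by this code:

```python
# Hypothetical sketch of the ps_2_b limits discussed above.
PS_2_B_LIMITS = {
    "instruction_slots": 512,  # maximum for any PS 2.x profile
    "temp_registers":    32,
    "dependent_reads":   4,    # still only 4 levels of dependency
}

def fits_ps_2_b(shader):
    """shader: dict with the same keys as PS_2_B_LIMITS."""
    return all(shader[key] <= PS_2_B_LIMITS[key] for key in PS_2_B_LIMITS)
```

For example, a shader needing 300 instructions and 20 temps at 2 levels of dependency fits; one needing 6 levels of dependent reads does not, no matter how short it is, which is the red flag mentioned above.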
 