Ok, as an 8500 owner...

sumdumyunguy

Newcomer
I have to ask: with these prices (for the mid-range Ti product), how well will ATI do in the interim until the R250 and/or R300? I know they have upped the memory to 128 MB to match (other than higher resolutions, what other benefits, if any?), but really. Are they about to take a serious beating for the next few months?

P.S. There is a rumour going around that ATI has completely discontinued the original 8500 part...
 
Well, the part that should be directly competing with the Radeon 8500, price-wise, is the GeForce4 Ti 4200 -- and that's not due out for a couple of months.

If all ATI has to respond to the GeForce4 is a 128 MB Radeon 8500, then yes, it appears to me that nVidia will own this round. ;)

What we don't know is what the RV250 is, or when it's to be expected. Whatever it is, ATI needs to get it out the door around the April timeframe.

I'm not expecting any DX9 part (R-300 or NV30) until the fall.
 
Ditto to what Joe said; I'm quite pleased with the 8500 feature set. I may try an 8500 128 MB, as I sold my retail 8500 last week. The GeForce4 prices are much too high in Canada at this time or I'd give a 4400 or 4600 a shot; I just don't have $750 right now.
 
Well, for me it seems like going from my Radeon 8500 to a GF4 would be a downgrade. It looks like those "GF4 doesn't bring anything new" claims were more or less right. Looking through the list of OpenGL extensions in xbit's review, I couldn't find any extension I haven't heard of before.
 
FYI: Dan Vivoli seemed pretty confident that nVIDIA will be the first to market with a DX9 compliant part. We'll see if that comes about. I have seen one developer say that ATI stated R300 would be a Xmas 2002 part.
 
On 2002-02-06 21:35, Humus wrote:
Well, for me it seems like going from my Radeon 8500 to a GF4 would be a downgrade. It looks like those "GF4 doesn't bring anything new" claims were more or less right. Looking through the list of OpenGL extensions in xbit's review, I couldn't find any extension I haven't heard of before.

GL_NV_texture_shader3 is a new extension only supported by NV25.
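If you'd rather check a driver yourself instead of relying on a review's extension dump, the list is just the string returned by glGetString(GL_EXTENSIONS). A minimal C sketch (assumes a GL context is already current, otherwise glGetString returns NULL; it compares whole space-separated names so a shorter name doesn't false-match as a prefix of a longer one):

[code]
/* Minimal check for GL_NV_texture_shader3 in the driver's extension string.
 * Assumes an OpenGL context is already current. Whole-token comparison
 * avoids prefix false matches, e.g. GL_NV_texture_shader matching inside
 * GL_NV_texture_shader3. */
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);

    while (ext && *ext) {
        const char *end = strchr(ext, ' ');
        size_t cur = end ? (size_t)(end - ext) : strlen(ext);
        if (cur == len && memcmp(ext, name, len) == 0)
            return 1;
        ext = end ? end + 1 : NULL;
    }
    return 0;
}

/* usage, somewhere after context creation: */
void report_texture_shader3(void)
{
    printf("GL_NV_texture_shader3: %s\n",
           has_extension("GL_NV_texture_shader3") ? "exposed" : "not exposed");
}
[/code]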
 
Although not GF3/GF4-specific, NV_vertex_array_range and NV_fence are a very nice pair of extensions I'd like to see used elsewhere.

They offer quite a performance boost even on high-end systems and make perfectly good sense (i.e. why copy every stinkin' vertex array twice? Write it once, then go..).
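For anyone curious, here's a rough C sketch of how that pair is typically wired up under Windows: write each frame's vertices straight into GPU-readable memory once, draw from it, and use a fence to know when it's safe to overwrite that memory again. The buffer size, read/write frequencies and priority are purely illustrative, error handling is omitted, and the entry points have to come from wglGetProcAddress since these are NV extensions.

[code]
/* Sketch of NV_vertex_array_range + NV_fence (Windows/wgl). Enum values
 * are taken from the extension specs; buffer size, read/write frequency
 * and priority arguments are only illustrative. No error handling. */
#include <windows.h>
#include <string.h>
#include <GL/gl.h>

#define GL_VERTEX_ARRAY_RANGE_NV 0x851D
#define GL_ALL_COMPLETED_NV      0x84F2

typedef void * (APIENTRY *PFNWGLALLOCATEMEMORYNVPROC)(GLsizei size, GLfloat readfreq, GLfloat writefreq, GLfloat priority);
typedef void   (APIENTRY *PFNGLVERTEXARRAYRANGENVPROC)(GLsizei length, const GLvoid *pointer);
typedef void   (APIENTRY *PFNGLGENFENCESNVPROC)(GLsizei n, GLuint *fences);
typedef void   (APIENTRY *PFNGLSETFENCENVPROC)(GLuint fence, GLenum condition);
typedef void   (APIENTRY *PFNGLFINISHFENCENVPROC)(GLuint fence);

static PFNWGLALLOCATEMEMORYNVPROC  wglAllocateMemoryNV;
static PFNGLVERTEXARRAYRANGENVPROC glVertexArrayRangeNV;
static PFNGLGENFENCESNVPROC        glGenFencesNV;
static PFNGLSETFENCENVPROC         glSetFenceNV;
static PFNGLFINISHFENCENVPROC      glFinishFenceNV;

static void init_var_entry_points(void)
{
    wglAllocateMemoryNV  = (PFNWGLALLOCATEMEMORYNVPROC) wglGetProcAddress("wglAllocateMemoryNV");
    glVertexArrayRangeNV = (PFNGLVERTEXARRAYRANGENVPROC)wglGetProcAddress("glVertexArrayRangeNV");
    glGenFencesNV        = (PFNGLGENFENCESNVPROC)       wglGetProcAddress("glGenFencesNV");
    glSetFenceNV         = (PFNGLSETFENCENVPROC)        wglGetProcAddress("glSetFenceNV");
    glFinishFenceNV      = (PFNGLFINISHFENCENVPROC)     wglGetProcAddress("glFinishFenceNV");
}

/* Draw nverts vertices (x,y,z floats). The vertex data is written into the
 * GPU-readable range exactly once per frame; the fence tells us when the
 * GPU has finished pulling from it, so no second copy is kept anywhere. */
static void draw_frame(const float *verts, int nverts)
{
    static float *var_mem;                     /* AGP/video memory the GPU reads directly */
    static GLuint fence;
    static const GLsizei BUF_SIZE = 1 << 20;   /* 1 MB, illustrative */

    if (!var_mem) {
        /* priority ~0.5 asks for AGP memory; ~1.0 would ask for video memory */
        var_mem = (float *)wglAllocateMemoryNV(BUF_SIZE, 0.2f, 0.2f, 0.5f);
        glVertexArrayRangeNV(BUF_SIZE, var_mem);
        glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
        glGenFencesNV(1, &fence);
        glSetFenceNV(fence, GL_ALL_COMPLETED_NV);  /* so the first wait below returns at once */
    }

    glFinishFenceNV(fence);                        /* block until the GPU is done with the last frame */
    memcpy(var_mem, verts, (size_t)nverts * 3 * sizeof(float));   /* the single write */

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, var_mem);
    glDrawArrays(GL_TRIANGLES, 0, nverts);

    glSetFenceNV(fence, GL_ALL_COMPLETED_NV);      /* mark the end of this frame's reads */
}
[/code]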
 
On 2002-02-06 23:37, DaveBaumann wrote:
FYI: Dan Vivoli seemed pretty confident that nVIDIA will be the first to market with a DX9 compliant part. We'll see if that comes about. I have seen one developer say that ATI stated R300 would be a Xmas 2002 part.

When is the DX9 release scheduled? Oct/Nov '02?
 
On 2002-02-07 00:59, pcchen wrote:
On 2002-02-06 21:35, Humus wrote:
Well, for me it seems like going from my Radeon 8500 to a GF4 would be a downgrade. It looks like those "GF4 doesn't bring anything new" claims were more or less right. Looking through the list of OpenGL extensions in xbit's review, I couldn't find any extension I haven't heard of before.

GL_NV_texture_shader3 is a new extension only supported by NV25.

Yeah, I suppose that's it.
 
On 2002-02-07 06:10, multigl wrote:
If all goes well, the R300 is a fall part.

So multigl (I'm lovswr at rage3d), you don't think that the fall would be too late? As you know, over at rage3d many peeps have been touting an earlier arrival of the R300 (R250 perhaps?). Some say as soon as next month. I like my 8500; just disappointed that the top of the heap got "pushed up" so soon.
 
ATI will do very well.

Unfortunately I changed back to the dark side (Nvidia). I just bought a new Asus GF3 Ti200 Deluxe :D

I wanted DX8 and also video capture (Video IN, VCR MPEG-2) and video OUT. Unfortunately the Radeon 8500DV is 40% more expensive here in Brazil :cry:

With the difference I bought a new TDK VeloCD CD writer :cool:

 
Pascal,

You probably would have been better off getting a TV Wonder add-in card or some other TV capture card... that way you could get the top-end performance card and also have the capture/TV features too.
:smile:
 
For curiosity some prices here (I was on vacation and did some search):

8500DV -------------- US$ 552
ASUSGF3Ti200Deluxe -- US$ 399
MSIGF3Ti200 --------- US$ 300 (no video in, VCR)
Hercules4500 (kyroII) US$ 220
Radeon 32MBDDR ------ US$ 160

No standard/OEM Radeon 8500 available.

The GF3 drivers are improving; see the latest Unreal Performance Test 2002 - Build 856 at 1024x768x32: http://www.anandtech.com/video/showdoc.html?i=1583&p=12

It is side by side with 8500LE :D

I installed it this afternoon and the performance is good, but RTCW is not working and the Q3TA sky is not very good. Well, I am still learning :smile:
 
I can't believe that :eek: http://news.zdnet.co.uk/story/0,,t269-s2103925,00.html
Nvidia to phase out GeForce3
13:08 Thursday 7th February 2002
Matthew Broersma and Jonathan Bennett


As GeForce4 comes onstream the graphics chip maker will drop GeForce3 from its range, using an older product for its budget line. Sources say the next generation will use a fully programmable architecture
Nvidia is to phase out its GeForce3 line of graphics chips entirely as it makes room for the new GeForce4, the company said on Wednesday. Instead, its low-end product will become the GeForce2 MX, complementing the midrange GeForce4 MX and the high-end GeForce4 Ti.

Sources close to the company have also suggested that Nvidia will move to a fully programmable architecture with its next chip design. Such a move could mean a much shorter lag time between when new features are introduced into a graphics chip and when they appear in new games. Current chips have their functions fixed in hardware.

The GeForce4 is much faster than its predecessor, last year's GeForce3 Ti, offering more than double the speed on certain features. Nvidia executives said GeForce chips are improving at a rate of "Moore's Law cubed", with power doubling every six months. Moore's Law states that the power of a processor doubles every eighteen months. The downside of this, however, is that other components such as memory may end up forming a bottleneck within the graphics card itself, Nvidia said.

Even so, Nvidia estimates that 3D moving images rendered on consumer graphics cards will not reach the same quality as filmed images for another 10 years.

Executives spoke at the Atomium in Brussels on Wednesday for the European launch of GeForce4.

Nvidia did not explain why it would be dropping the GeForce3 line, but it is likely that the company does not want its budget products competing too closely with the midrange and high-end lines. GeForce4 MX is a stripped-down version of GeForce4 Ti, lacking some high-end features like hardware vertex shaders, instead relying on software for vertex shading. Vertex shading is a technique for more realistic rendering of gradations in 3D objects.

As a result, GeForce4 MX does not offer the same massive performance boost over GeForce3 that is to be found in the GeForce4 Ti line. GeForce4 MX does, however, include some of GeForce4 Ti's new features like multiple display support and hardware-based anti-aliasing.

The GeForce4 chips use a dedicated hardware engine to carry out the processing work of anti-aliasing, which smoothes the jagged edges of 3D images, Nvidia said. The dedicated hardware means the load on other components of the processor, such as the pixel shaders, is reduced. For users, that means the performance cost of turning on such features is lower; for example, the top-of-the-line GeForce4 Ti 4600 with the basic "Quincunx" level of anti-aliasing turned on runs faster than the GeForce3 Ti 500 without anti-aliasing.

The next generation of Nvidia chips will offer a feature to make game developers' lives easier in the form of fully programmable hardware, according to sources. This means that unlike with current chips, which implement features in hardware, developers will be able to specify which functions the chip carries out using a firmware image.

Game developers would be able to test out new chip features even as Nvidia is developing them, meaning a far shorter lead time before those features find their way into new games. It currently takes about 12 months from the time a new graphics chip is launched before new games can fully take advantage of it.

A fully programmable architecture would also make it easier to build multi-processor graphics cards, as one GPU (graphics processing unit) could be programmed to carry out half of the chip's functions while another GPU took care of the remaining functions.

The Radeon 8500 (or GF3 Ti200) is 70% to 80% faster than the GF4MX 460 when playing next-generation games like Unreal 2, and they want people to buy the GF4MX??????

The GF4MX is where they are targeting the big OEMs. People will say "GF4 is the fastest graphics accelerator", and the kids will say "Please, dad, buy me a new Dell (or other OEM) with a GF4" :mad:

 
Of course that's what they'll do.

Nvidia wants that to happen... because after they find out they can't play the games they wanna play well, they'll whine until daddy goes out and spends another few hundred bux on an Nvidia card that CAN play their games.
 