AMD: R7xx Speculation

Status
Not open for further replies.
Certainly it looks well placed to invalidate much of the G8x/G9x lineup. And the GTX260 is probably going to have a hard time selling if the 4850 undercuts it by much.
I wonder if HD4870 will "match" GTX260 (GTS260?), since GTX 280 is meant to be 1.5x+ faster than 9800GTX and GTS260 is <80% of GTX280.

Jawed
 
Since late in the Catalyst 5 series, IIRC. Your opinion is literally years out of date.
Not in my experience. My ATI card is far better in DirectX than it is in OpenGL, whereas on my other PC with a budget GeForce 6 series, switching between the two doesn't make a great deal of difference.
 
I wonder if HD4870 will "match" GTX260 (GTS260?), since GTX 280 is meant to be 1.5x+ faster than 9800GTX and GTS260 is <80% of GTX280.

Jawed

I'm working off the assumption that the 280 is going to be 1.5x faster than the GX2 rather than the GTX.

I haven't been following things that closely, though, so maybe that assumption has been invalidated already.
 
I'm working off the assumption that the 280 is going to be 1.5x faster than the GX2 rather than the GTX.

I haven't been following things that closely, though, so maybe that assumption has been invalidated already.
1.5x the GX2 is a tall ask; that's probably a best-case scenario, e.g. situations where the GX2's smaller frame buffer is the limiting factor versus the GTX 280's.
 
I'm working off the assumption that the 280 is going to be 1.5x faster than the GX2 rather than the GTX.

I haven't been following things that closely, though, so maybe that assumption has been invalidated already.
Put another way, if GTS260 is 0.75x GTX280 and the latter is 2x faster than 9800GTX, then in broad terms it's 1.5x faster for GTS260 versus 1.4x faster for HD4870, for $100+ less.

So, ahem, the question is, which of these two needs the stronger tail wind?...

It is kinda amusing that HD2900XT was set against the 8800GTS and it seems HD4870 is going against the new GTS.

Jawed
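FWIW, that back-of-the-envelope arithmetic can be sketched out explicitly. Every multiplier below is a rumoured figure from this thread, normalised to a 9800GTX = 1.0; nothing here is a confirmed spec:

```python
# Relative-performance arithmetic from the rumoured multipliers in this
# thread; all figures are speculation, normalised to 9800GTX = 1.0.
GTX280_VS_9800GTX = 2.0    # rumour: GTX 280 ~2x a 9800GTX
GTS260_VS_GTX280 = 0.75    # rumour: GTS 260 ~75% of a GTX 280
HD4870_VS_9800GTX = 1.4    # rumour: HD 4870 ~1.4x a 9800GTX

# GTS 260 relative to a 9800GTX: 2.0 * 0.75 = 1.5x
gts260 = GTX280_VS_9800GTX * GTS260_VS_GTX280
print(f"GTS 260 ~= {gts260:.2f}x 9800GTX, HD 4870 ~= {HD4870_VS_9800GTX:.2f}x")
```

So on those rumours alone, the two parts land within about 0.1x of each other.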
 
I think AMD might have cut the vcore a bit to get the 4850 down to one slot - it might not overclock well without a voltage bump.

They must've. The decrease in clock speed from the HD4870 does not even remotely correspond to the decrease in TDP, so voltage is the only other variable. Technical deduction aside, using a lower input voltage just plain makes sense from a manufacturing POV (yield), and the same goes for thermals.
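For what it's worth, the usual first-order rule of thumb is that dynamic power scales roughly with frequency times voltage squared, which is why a clock cut alone can't account for a disproportionately larger TDP drop. A quick sketch with purely illustrative ratios (not AMD's actual clocks or voltages):

```python
# First-order dynamic power model: P ~ f * V^2 (capacitance held constant).
# The ratios below are purely illustrative, not real HD 4850/4870 specs.
def relative_power(f_ratio, v_ratio):
    """Power of the derived part relative to the original part."""
    return f_ratio * v_ratio ** 2

# A 20% clock cut alone only saves 20% of the power...
print(relative_power(0.80, 1.00))
# ...but add a 10% vcore cut and the V^2 term compounds it to ~35% less.
print(relative_power(0.80, 0.90))
```

The voltage term does the heavy lifting, which fits the single-slot-cooler deduction above.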
 
Who was talking about games?

I use a lot of emulation programs, and selecting an OpenGL filter always grinds the performance down.

Not a common case then... Who *wouldn't* talk about games when referring to graphics APIs DESIGNED FOR GAMING? Poor perf. via emulation has always been an issue, regardless of the platform. Your apps are likely illegal anyway, so why should an IHV invest resources in optimizing for them?
 
The ROMs may be illegal but not the applications themselves.

And on that note, with emulation becoming common on console platforms, perhaps it is being optimized for by IHVs.
 
Since when has ATI held its own with OpenGL?

In my limited experience (I used to use NVIDIA religiously, now ATi), they have always been considerably slower in OpenGL than NVIDIA.

DOOM 3 (OpenGL) : Old days / 4Q-2007

EDIT: another recent bench

EDIT #2: NEWS -> June 11, 2008 - AMD today announced that it has reached a new milestone in the mobile graphics industry as the first 3D graphics technology provider to achieve OpenGL ES 2.0 conformance certification.
 
Not a common case then... Who *wouldn't* talk about games when referring to graphics APIs DESIGNED FOR GAMING? Poor perf. via emulation has always been an issue, regardless of the platform. Your apps are likely illegal anyway, so why should an IHV invest resources in optimizing for them?
But, nevertheless, it proves my point.

Having to add a fix per game is not having good OpenGL performance; it is working around what is broken in their driver so that it works.

I am an ATI fan, so don't see this as an nvidia fanboy passing judgement.

I was very disillusioned when I played Knights of the Old Republic on my X1950 and it was barely faster than it was on my old PC with a GeForce 3 Ti 500. Granted, I was using AA and 1024x768, but I was expecting a much larger jump (and saw one in DX-based games).
 
But, nevertheless, it proves my point.

Sure, for your specific usage. But that only holds true for you, unlike "normal" PC gaming, where it applies to anyone who plays OGL titles.

Having to add a fix per game is not having good OpenGL performance; it is working around what is broken in their driver so that it works.

This statement implies a poor understanding of the state of the modern graphics industry, and of how drivers work in particular.

I am an ATI fan, so don't see this as an nvidia fanboy passing judgement.

No one's calling anyone a fanboy here. I was just trying to correct your misunderstanding of ATi's OGL performance.

I was very disillusioned when I played Knights of the Old Republic on my X1950 and it was barely faster than it was on my old PC with a GeForce 3 Ti 500. Granted, I was using AA and 1024x768, but I was expecting a much larger jump (and saw one in DX-based games).

I'm not sure why you would expect a CPU-limited title (using CPU-limited settings) to perform better with a GPU upgrade... I've played KOTOR on my X1650XT @ 1280x1024 w/all settings maxed and it ran fantastically. This would've been last year on a 7-series driver.
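A crude way to picture the CPU-limited case is a toy frame-time model. The frame times below are made-up illustrative numbers, not measurements of KOTOR:

```python
# Toy frame-time model: assume CPU and GPU work overlap per frame, so the
# slower of the two stages gates the frame rate. Numbers are illustrative.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

slow_gpu = fps(cpu_ms=25.0, gpu_ms=24.0)  # CPU and GPU roughly even
fast_gpu = fps(cpu_ms=25.0, gpu_ms=6.0)   # 4x the GPU, same 25 ms of CPU work
print(slow_gpu, fast_gpu)  # both land at 40 fps: the CPU is the wall
```

Until something shrinks the CPU cost (or the settings push the GPU term above it, e.g. higher resolution plus AA), the GPU upgrade can't show up in the frame rate.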
 
OpenGL benchmark -> Phoronix

Conclusion: "It's phenomenal to see the Linux changes made with AMD, from them taking six months or more to support their product families in the past and that yielding less than desirable results to now a point of reaching Linux performance supremacy and nearly all of the same features available to Windows users."
 
This was ages ago, but I remember having serious issues with Medal of Honor (Quake 3 OGL engine) when I got my 9500 Pro. My performance was about half what I saw with my GF3... That's the only OGL title I had problems with, as far as I can remember.
 
I was just trying to correct your misunderstanding of ATi's OGL performance.
They've certainly tuned-up the performance of the handful of available modern OpenGL titles, which is a good thing. But their OpenGL driver still leaves a lot to be desired last I checked (2-3 months ago).

Namely, they have made no effort to support anything beyond DX9.0-level features (they don't even support vertex texture fetch on R600+... and that's not even an extension!). Furthermore, there's no arguing that NVIDIA's OpenGL driver more consistently delivers good performance. ATI/AMD's does okay with the standard stuff, but has a lot of nasty performance cliffs for certain features or engine designs. NVIDIA is certainly guilty in a few places here too, but much less so than ATI/AMD in OpenGL.

Don't get me wrong: I really like ATI and hope that they do well in the future. Their Direct3D driver in particular is an absolute joy to use. But the fact remains that their OpenGL driver still has a lot of bugs when you're not running it through the "Doom3" path, it is still missing a ton of features (even DX9 stuff!), and it is currently impossible to write anything resembling a "modern" (DX10+) engine on an ATI card in OpenGL.

That said, maybe R7xx will change all of that? Please?
 