AMD: R9xx Speculation

6970
3DMark Vantage = X12000, P24499.
Unigine Heaven (1920x1200, 4xAA, 16xAF) = 36.6

5870
Unigine Heaven (1920x1200, 4xAA, 16xAF) = 17.3
I'm not sure if this is new or not... Is there any other information about this card?
 
If you remember the HD2900XT's launch and all the demos and presentations, there was a demo of tessellated hills, which were very similar to the hills in HAWX2. This demo didn't even use adaptive tessellation, the wireframe was quite dense, and it ran at around 80 FPS on the HD2900XT. I'm not sure, but I think these tessellated hills (probably optimized by adaptive tessellation?) were used in the Ruby demo, and very similar ones were used in the Froblins demo. All these demos ran well even on 3-4 year old hardware. I see no reason why HAWX2 (with an almost identical-looking landscape) should be so much slower. Hardware tessellation is fast as long as the code doesn't contain viral optimisations.

Maybe B3D's architectural analysis (not only of Fermi) can give you a hint?
http://www.beyond3d.com/content/reviews/55/10
One passage says:
"Given all this, fatter control points (our control points are as skinny as possible) or heavy math in the HS (there's no explicit math in ours, but there's some implicit tessellation factor massaging and addressing math) hurt Cypress comparatively more than they hurt Slimer - and now you know how the 3 clocks per triangle scenarios come into being, a combination of the two aforementioned factors."

I don't know for sure, but since tessellation didn't seem that programmable before, this exact problem might not have existed when they did their tech demos? And maybe it even caught them off guard: having had tessellation hardware for such a long time, they thought it would all be well and that Nvidia would be doing "slow software emulation" anyway - as Charlie hinted more than once?
 
No problem. :) Should you run into problems with the hack, keep trying. It's really worth having the MLAA option, in my opinion. Check the other thread for screenshots. :D

I was playing COD4 last night and I thought the edges looked much better compared to the usual 4xAA and 16xAF. I didn't notice any tearing, and the telephone lines and edges do look much better than before (imo).

However, the in-game FPS monitor shows that the FPS has actually dropped. I can't blame MLAA, but I think it has to do with the amalgamated settings in the new CCC as opposed to the older CCC settings. Before, the game would constantly stay at 125 FPS; now it's about 110 FPS.
 
I was playing COD4 last night and I thought the edges looked much better compared to the usual 4xAA and 16xAF. I didn't notice any tearing, and the telephone lines and edges do look much better than before (imo).

However, the in-game FPS monitor shows that the FPS has actually dropped. I can't blame MLAA, but I think it has to do with the amalgamated settings in the new CCC as opposed to the older CCC settings. Before, the game would constantly stay at 125 FPS; now it's about 110 FPS.

Honest question: were you still running in-game AA, or did you disable it?

On games that allow it, I'm doing something like 4xMSAA (4x in-game, Adaptive AA set in CCC) and then forcing 8xMLAA through the control panel. Yes, there's a performance hit, but it's way less than anything else for the comparative image quality.
 
You said the framerate dropped - compared to no AA, or compared to if you were already running 2x or 4x?

Already running 4xAA in-game with the old CCC settings (default), the FPS was 125. With the new CCC and default settings (other than MLAA and the surface format option ticked), the FPS is about 110.
 
MLAA + text, possible solution:

1. Add an alpha mask for the text and HUD in post-processing. This can be done either by the developer or by AMD/Nvidia through profiles.

2. Run MLAA and enjoy a near-100% perfect result :smile:

This doesn't negatively affect players who don't use MLAA; for those who do - pure visual sexiness ;)

Another way: an advanced MLAA algorithm that can recognize text patterns and doesn't apply the MLAA filter to those pixels. Positive: no need to bother with masks. Negative: the output image would still show some deterioration, since it won't help with the HUD (unless it's specifically made with MLAA in mind), and not always with text either.

In an ideal world it would be best if devs used masks and implemented different levels of MLAA, i.e. the player could choose the MLAA strength he prefers.
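The mask-then-filter idea above can be sketched in a few lines. This is a hypothetical illustration, not how the driver actually does it: `apply_mlaa_with_mask`, `hud_mask`, and `mlaa_filter` are all made-up names, and the real MLAA pass runs on the GPU, not in NumPy.

```python
import numpy as np

def apply_mlaa_with_mask(frame, hud_mask, mlaa_filter):
    """Apply a post-process AA filter everywhere except masked HUD/text pixels.

    frame:       (H, W, 3) float array - the rendered image
    hud_mask:    (H, W) bool array - True where HUD/text was drawn
    mlaa_filter: callable taking and returning an (H, W, 3) array
                 (stands in for the real MLAA pass)
    """
    filtered = mlaa_filter(frame)
    # Composite: keep the original pixels under the mask, filtered elsewhere,
    # so text and HUD edges are never blurred by the AA filter.
    return np.where(hud_mask[..., None], frame, filtered)
```

The point is that the mask costs almost nothing at composite time; all the work is in getting the developer (or a driver profile) to supply it.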
 
6970
3DMark Vantage = X12000, P24499.
Unigine Heaven (1920x1200, 4xAA, 16xAF) = 36.6

5870
Unigine Heaven (1920x1200, 4xAA, 16xAF) = 17.3
I'm not sure if this is new or not... Is there any other information about this card?

That is likely the 6950, at 25% more than the GTX 480, and the 6970 will likely be 40-60% more than that.
If not, they'll be spanked by the new GTX 580.
 
6970
3DMark Vantage = X12000, P24499.
Unigine Heaven (1920x1200, 4xAA, 16xAF) = 36.6

5870
Unigine Heaven (1920x1200, 4xAA, 16xAF) = 17.3
I'm not sure if this is new or not... Is there any other information about this card?
Those are the same fake results that appeared a while back. They were labeled as "6800" results; now they are 6900?
 
One thing I've been wondering about: if I took an HD6870 and undervolted/underclocked it to HD6850 levels, would the power consumption be similar?
Also, is there a guarantee that all HD68xx models will actually undervolt? Some HD46xx cards, for example, did not have the ability to undervolt at idle.

I'm going to do this experiment. I've ordered the Asus 6870, and intend to drop GPU voltage to 1.0V and frequency to 800 MHz. According to TechPowerUp's voltage vs. frequency data, the 6870 is pushed quite close to its voltage/frequency limits. The flip side of that coin is that, by their graphs, if you're just prepared to lower GPU frequency slightly, you should be able to substantially lower voltages. The settings above look attainable while still maintaining a reasonable safety margin. If so, GPU power should drop to (1.0/1.2)^2 * (800/900) ≈ 0.62 of stock.

Meaning I could get 90-100% (depending on game and settings) of the normal 6870 performance at less than two thirds the power. At least, that's the theory. We'll see. The purpose of the exercise is to get a graphics card that doesn't have a particular DisplayPort problem of the 5XX0 series, and which is also quiet. This may be attainable with the stock 6870 cooling solution with custom fan settings. Or not.
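The estimate above uses the usual rule of thumb that dynamic power scales with V²·f. A quick sketch of the arithmetic (the function name is made up, and this ignores static leakage, which also drops with voltage, so the real saving could be a bit larger):

```python
def gpu_power_ratio(v_new, v_old, f_new, f_old):
    """Ratio of new to old dynamic power, assuming P is proportional to V^2 * f.

    Ignores static (leakage) power, which also falls with voltage,
    so this is a conservative estimate of the saving.
    """
    return (v_new / v_old) ** 2 * (f_new / f_old)

# The undervolt proposed above: 1.2 V / 900 MHz down to 1.0 V / 800 MHz.
ratio = gpu_power_ratio(1.0, 1.2, 800, 900)
# ratio ≈ 0.617, i.e. roughly 62% of stock GPU power
```

Of course the board also has memory and VRM losses that don't scale this way, so total card power won't drop quite as far as the GPU core figure.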

To keep consistent with AMD naming conventions, I'll call the resulting card "Radeon 6970 Mobility".
 
To keep consistent with AMD naming conventions, I'll call the resulting card "Radeon 6970 Mobility".

First off, no, that will change.

Second, the mobile 5770 was called 5870 Mobility because the true 5870 Mobility, the Lexington chip, got cancelled.
 
Any leaks on Cayman yet, specifically the 6950? I think this is going to be the .....real upgrade for HD 4800 and not Barts XT....

The 6950 should replace the 5870, like what the 6870 did to the 5850 - not expecting major perf upgrades, but fast enough to put it ahead. Barts XT has 14 SIMDs, so Cayman Pro should get 16 SIMDs for a total of 1440 SPs, but with a higher clock of 850 MHz? ROPs will stay at 32 (48 for XT)? 64 TMUs? Assuming 256-bit and GDDR5 won't go beyond 1200 MHz... how will 4D shaders improve the speed - is this the (removed) point number 7 for faster tessellation? What does the off-chip cache sound like to you? Die size 300mm²? Any way you arrange it, the arch seems dead-ended on 40nm... beating the HD5800 parts with fewer SPs comes down to higher core speed.

The important thing is the price... $299? Fair? Possible?
 
Any leaks on Cayman yet, specifically the 6950? I think this is going to be the .....real upgrade for HD 4800 and not Barts XT....

The 6950 should replace the 5870, like what the 6870 did to the 5850 - not expecting major perf upgrades, but fast enough to put it ahead. Barts XT has 14 SIMDs, so Cayman Pro should get 16 SIMDs for a total of 1440 SPs, but with a higher clock of 850 MHz? ROPs will stay at 32 (48 for XT)? 64 TMUs? Assuming 256-bit and GDDR5 won't go beyond 1200 MHz... how will 4D shaders improve the speed - is this the (removed) point number 7 for faster tessellation? What does the off-chip cache sound like to you? Die size 300mm²? Any way you arrange it, the arch seems dead-ended on 40nm... beating the HD5800 parts with fewer SPs comes down to higher core speed.

The important thing is the price... $299? Fair? Possible?

My bet is $299 for Pro and up to $399 for XT.
 
Fermi-level performance for $150 better price? That would be really nice, but isn't it overly optimistic?

Well, I don't know about $299, but Cayman Pro can't be too expensive either, because that would leave a big hole between it and Barts XT - which is precisely what allowed NVIDIA to gain market share with the GTX 460, except that hole was between Cypress Pro and Juniper XT.
 
Fermi was expensive because of the large die size... and 40nm production problems?
Didn't the 5850 launch at $299? ... $299 for the 6950 sounds all but expected (maybe $329... sucks!)? Who knows if the renaming scheme was planned for AMD to introduce higher price segments... Perf-wise it should land between the 480 and the 5870. And seeing some of the benches, the 6870 actually kept up within 4-8 FPS out of the 5870's 40-60+ FPS in many games (thus not a percentage thing), while other games see an explosion of double-digit FPS gains - always wondered what's up with that?

With AMD's 2nd-gen 40nm tune-up, I see no reason for Cayman Pro to go beyond 300 bucks. Gonna grab one ASAP at $299... expecting to see the lame 5850/5870 SRP creep up.

current prices are

$240 - 6870
$260 - 470
$270 - 5850
$380 - 5870
$440 - 480

the highest end parts are always overpriced.
 
Didn't the 5850 launch at $299? ... $299 for the 6950 sounds all but expected (maybe $329... sucks!)?

$299 for the 6950 would be great for us, but it's too close to the 6870, so initially it should be slightly above $300 IMO, $330 being the max (it will depend on performance too). Then add $100 for the 6970, which should spank the GTX 480 (even the 6950 should take care of that).
 