AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within a couple of months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
Except silly stuff like tessellation and compute shaders. :p
Oh yes, for so many years, version after version of D3D, we've seen exceptional-looking, fun-to-play games flawlessly using the new features, shipping within 1-2 months of the API becoming available... :rolleyes:
Name one, just one, such example so far. The games above will have a "DX11" checkbox that enables funny-looking rounded guns. Or, after a per-pixel comparison of DX10 vs. DX11 screenshots, we'll find 10 different pixels.

Any idea how big the market for Eyefinity is?
My guess is <0.1%, or even 0.01% :p, so why bother.
 
I find your lack of faith disturbing.

So does GOD,

he'd be using Eyefinity ;)

The bezels will disappear, as your vision basically deletes them after a while, the same way the eye's blind spot works: the eye and brain overlap what they see to compensate.
3x1 screens will become commonly used.
 
No, you are not limited to the max supported resolution. The software will enumerate the panels and pick multiples of resolutions that all panels support. If you have 3 panels of all the same type that support 1024x768, 1280x1024, 1920x1080 (for example) the following SLS (or Display Group, as they are called in CCC) resolutions are available:

3x1 Landscape:
3072x768
3840x1024
5760x1080

3x1 Portrait:
2304x1024
3072x1280
3240x1920

1x3 Landscape:
1024x2304
1280x3072
1920x3240

When a display group is defined, it is "seen" by the OS as a single panel, and the OS (or CCC) treats it as such, allowing all the resolution configurations the display group supports to be used; these are then available to the OS itself and to other applications.
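
For the curious, the arithmetic is simple enough to sketch. Here's a minimal Python illustration of the enumeration described above (just the math, not AMD's actual driver logic; the function and names are hypothetical):

[code]
# Toy sketch of how SLS/Display Group resolutions can be derived
# from the modes that every panel in the group supports.

def sls_resolutions(panel_modes, cols, rows, portrait=False):
    """panel_modes: (width, height) pairs supported by all panels."""
    result = []
    for w, h in panel_modes:
        if portrait:       # each panel rotated 90 degrees
            w, h = h, w
        result.append((w * cols, h * rows))
    return result

modes = [(1024, 768), (1280, 1024), (1920, 1080)]

print(sls_resolutions(modes, cols=3, rows=1))                 # 3x1 landscape
print(sls_resolutions(modes, cols=3, rows=1, portrait=True))  # 3x1 portrait
print(sls_resolutions(modes, cols=1, rows=3))                 # 1x3 landscape
# -> [(3072, 768), (3840, 1024), (5760, 1080)]
#    [(2304, 1024), (3072, 1280), (3240, 1920)]
#    [(1024, 2304), (1280, 3072), (1920, 3240)]
[/code]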

Thanks!
 
No, you are not limited to the max supported resolution. The software will enumerate the panels and pick multiples of resolutions that all panels support. [...]

Will the drivers support mixed landscape and portrait modes? Such as two 20" 1600x1200 panels in portrait and one 30" 2560x1600 in landscape.
 
Yes, but not as a single display surface (you've seen the pics, right?)

No, I haven't seen pictures of mixed mode yet.

I kinda thought that would be the case, but it doesn't hurt to ask. It's a shame though, since getting two more 30"s is expensive and would be too wide for my setup. I could easily repurpose my old 20"s for this if it worked (as an SLS)...
 
Steam shows 15% of its users have multi-monitor setups.
Meh, that stat is misleading...
It's not a comparison of multi-monitor vs. single-monitor users...
That ~15% is one specific multi-monitor resolution's share among other multi-monitor resolutions.

Oh yes, for so many years, version after version of D3D, we've seen exceptional-looking, fun-to-play games flawlessly using the new features, shipping within 1-2 months of the API becoming available... :rolleyes:
Name one, just one, such example so far. The games above will have a "DX11" checkbox that enables funny-looking rounded guns. Or, after a per-pixel comparison of DX10 vs. DX11 screenshots, we'll find 10 different pixels.

Any idea how big the market for Eyefinity is?
My guess is <0.1%, or even 0.01% :p, so why bother.

I'm pretty sure that even DX9 will be able to "emulate" almost all of the DX11 effects of the first wave of DX11 titles, yes.

Is "checkbox" DX11 and "emulate" DX11 still considered fake DX11 when it gains the DX11 performance increase estimates by AMD?
 
I just thought about the performance/price ratio of the $400 DX11 part (5870), and things don't look too good for the lower parts (I mean the 5770 or 5670, or whatever their names will be).

Let's take the 48XX series.
The 4870/4850 launched in Q2 2008 at $300/$200. Of course, the better performance/price ratio belonged to the 4850.

In Q3 2008 the 4850 was at $180; at that time ATI launched the 4670 at $80.
The 4670 was more than half the speed of the 4850 (let's say the weighted average was 1.8x in the 4850's favor; even at 1920x1200 4xAA 16xAF, where the 4670 is at a disadvantage, that is a generous weighted average).

So the 4670 had a +25% better performance/price ratio than the recently launched 4850.

I am comparing parts that launched at the same time, or less than a quarter apart...

In Q2 2009 ATI launched the 850MHz 4890 at $250 and the 4770 at $110 (if 40nm 4770 yields hadn't been so bad, maybe ATI would have priced the 4770 at the psychologically right price of $99, but anyway).

The 850MHz 4890 was around 1.75x faster than the 4770 (again a weighted average of many games at 1920x1200 4xAA 16xAF; at that resolution the 512MB/1GB difference really matters in some games, but the comparison is still fair).

So again the 4770 had a +30% better performance/price ratio than the recently launched 4890.

Again, I am comparing parts that launched at the same time, or less than a quarter apart
(it makes no sense to use the 4850's April 2009 performance/price ratio, because it was a Q2 2008 GPU and by April 2009 yields were excellent; the new 58XX/57XX (or 56XX) parts will be released within a quarter of each other, I think).

Even with Nvidia this holds (compare the Q1 2009 GTS250 1GB launch against the GTX275 or GTX285 performance/price ratio...).

So what I am trying to say is: if the 5770 (or 5670, I don't know how ATI will name them) parts have 16 ROPs and a 128-bit memory bus, how are they going to have a +25%/+30% better performance/price ratio than the 5870?

Let's take a very good scenario:

The 5770 can achieve the same 850MHz as the 5870 and has the same 1.2GHz GDDR5.
Let's say it has a better SP/ROP ratio than the 5870; say 16 ROPs / 48 TUs / 960 SPs (and a monster of an efficient memory controller...).

Then the 5870 will be around 1.7x the speed of the 5770.
Even in this scenario ATI must launch the 5770 at $180 to maintain a +30% better performance/price ratio...

And I don't think that scenario is very likely...

Even if the 5770 is 850MHz core / 1.2GHz GDDR5 with 16 ROPs / 40 TUs / 800 SPs, ATI must price it at $150 to maintain the +30% better performance/price ratio...

I don't think there is any chance of returning to ATI X1600XT ($169) and NV 8600GTS ($199) performance/price ratios, but it also looks difficult for ATI to maintain this +30% trend...

I just want to clarify that the weighted-average performance differences I used are higher than what I actually believe (if we assume lower than 1.8x or 1.75x, the perf/price advantage comes out higher than +25%/+30%...).

So in this context, I think the odds are that ATI will offer something less (relative to the performance/price ratio we've had for the last 1.5 years...) to their 5770 (or 5760) customers...
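
To make the arithmetic above explicit, here's a quick sketch (the performance multipliers are my weighted-average assumptions from above, and the function is just for illustration):

[code]
# Perf/price comparison using the figures quoted above.
# perf_ratio: how many times faster the bigger part is.

def small_card_advantage(big_price, small_price, perf_ratio):
    """Relative perf/price advantage of the smaller part (0.25 = +25%)."""
    return big_price / (small_price * perf_ratio) - 1

print(small_card_advantage(180, 80, 1.80))   # 4850 vs 4670 -> ~0.25 (+25%)
print(small_card_advantage(250, 110, 1.75))  # 4890 vs 4770 -> ~0.30 (+30%)

# Price the 5770 would need to keep a +30% advantage over a $400 5870:
for perf_ratio in (1.7, 2.0):
    print(400 / (perf_ratio * 1.30))         # -> ~$181 and ~$154
[/code]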
 
See what XFX has got on Eyefinity.

The video part is kind of amusing... :???:

NOW I remember what that reminded me of:

Veridian Dynamics (from Better Off Ted)
http://www.youtube.com/watch?v=ze-4HKCSpFI
 
So in this context, I think the odds are that ATI will offer something less (relative to the performance/price ratio we've had for the last 1.5 years...) to their 5770 (or 5760) customers...

If it's twice the performance for less than twice the price, how can it be less?

Anyhow, since when are price/performance differences set in stone? Don't forget they have to fit the new products within the current lineup, as they wouldn't want to cannibalise their 4xxx series just yet...
 
NOW I remember what that reminded me of:

Veridian Dynamics (from Better Off Ted)
http://www.youtube.com/watch?v=ze-4HKCSpFI

http://www.youtube.com/watch?v=Ia8OKMlqxLs

It made me think of the same thing :D
 
Except silly stuff like tessellation and compute shaders. :p
Tessellation can be used in a way that you won't even notice it, or will need to know where to look to notice it. Radeons have had tessellation for 2.5 years; what makes you think that everybody will start using it now?
Compute shaders are officially available on DX10-class hardware and are essentially nothing more than CUDA, but this time from Microsoft rather than NVIDIA.

Is "checkbox" DX11 and "emulate" DX11 still considered fake DX11 when it gains the DX11 performance increase estimates by AMD?
Who said anything about fake? But I can repeat: even in the sense of performance increases, PhysX will be (and probably already is) ahead of DX11.
I'm just saying it's damn funny to see the same people who bash PhysX all the time be so excited about DX11 _from one vendor_.
 