AMD: Sea Islands R1100 (8*** series) Speculation/Rumour Thread

Actually, Kepler cards seem to be doing fine even with three monitors: http://techreport.com/review/22922/nvidia-geforce-gtx-670-graphics-card/7
I admit I wasn't very clear when I said "even with different timings". Nvidia has been able to run 2 monitors at low idle power for a while, as long as they've got the same timings, and I think they simply extended that to 3 monitors (most likely all running synchronized off the same clock source?).
It seems they still cannot run 3 monitors with *different* timings at low idle power, though they can now do so with 2:
http://ht4u.net/reviews/2012/nvidia..._gtx660_sc_msi_n660gtx_tf_oc_test/index15.php
At least that's my interpretation; the article doesn't state whether the 3-monitor case that couldn't reach low idle power also used different timings (it only says it works with 2 monitors even when their timings differ).
Other explanations are possible too: the link you quoted is for a GTX 670, so it might always work with 3 monitors on that card regardless of timings but not on a GTX 660, or it might be driver dependent, or if this really relies on some cache it might even be resolution dependent.
But in any case it looks like Nvidia has at least partly solved that problem, and it would be nice if AMD did so too.
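To make the timing argument a bit more concrete, here's a rough back-of-envelope sketch (plain Python; the reclock duration is purely an assumption on my part) of why the memory reclock has to hide inside a shared vertical blanking window:

```python
# Rough sketch: memory reclocking briefly stalls the memory bus, so to avoid
# visible corruption it has to complete inside the vertical blanking interval
# of *every* attached display at the same time.
# Timings below are the standard CEA-861 1080p60 mode; the reclock duration
# is an assumed order-of-magnitude figure, not a measured one.
pixel_clock_hz = 148_500_000      # 1080p60 pixel clock
h_total = 2200                    # pixels per scanline, incl. horizontal blanking
v_active, v_total = 1080, 1125    # visible lines / total lines per frame

line_time_s = h_total / pixel_clock_hz
vblank_us = (v_total - v_active) * line_time_s * 1e6
print(f"vblank window per frame: ~{vblank_us:.0f} us")    # ~667 us

assumed_reclock_us = 300          # assumption: time needed to switch memory clocks
print("fits inside one display's vblank:", assumed_reclock_us < vblank_us)

# With identical timings (ideally driven from one synchronized clock source)
# all displays blank at the same moment, so there is one shared window.
# With different timings the blanking intervals drift relative to each other,
# no common window is guaranteed, and the driver keeps the memory clock high.
```

(The numbers are illustrative only, but they show why matching timings make the whole thing tractable.)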

Btw, the flickering could probably be quite irritating. Not if you start some full-screen game, but these days a lot more things can cause 3D activity (WebGL-enabled browsers, and even things like Aero _might_ cause the GPU to clock up for brief periods, etc.).
 
Doesn't seem to be working all that well here (read: at all), with a meager 2 monitors and no enumerated sound... Maybe ATI doesn't like Windows 8 yet or something!
From what I last read about such problems, it seemed to depend on the connectors used (IIRC HDMI and DP would be problematic for some reason, maybe just because they _could_ carry audio).
But yeah I guess there could be OS-dependent bugs too.
On the desktop, proper multi-monitor reclocking would probably add more value than ZeroCore Power anyway... well, I guess there's always hope.
 
There might be some new HSA features (not sure what the roadmap is) and perhaps new AA modes, since they tend to add those every now and then, but beyond that I can't see anything obvious missing from GCN, apart from NVIDIA's adaptive V-Sync, maybe. Oh, apparently AMD's idle power is really bad with multiple displays, for some reason, so improving this would be most welcome.

And of course, additional power-efficiency and performance is always nice. Their Turbo could probably stand to be more aggressive too.

Adaptive vsync-like features will be added in the next version of Radeon Pro, although we don't know how well it will fare. As was already mentioned, this is a driver feature, but yeah, I'd very much like to see it added in Catalyst.

One more thing that AMD could fix is the high power consumption in video acceleration mode.

http://www.techpowerup.com/reviews/ASUS/HD_7750/24.html

The 7750 overall consumes less than half the power of the GTS 450, yet the 450 still consumes less than the 7750 during Blu-ray playback.

Personally I don't really care about that, since I always disable every bit of video acceleration and browser/Flash acceleration, but many people may want to see this addressed. The higher-end models can produce some hefty wattage spikes even when just watching a YouTube video.
 
Higher power consumption is a result of shader post-processing (~higher quality). Since additional processing needs a non-zero amount of power, I have no idea how AMD could "fix" it.
 
Adaptive vsync-like features will be added in the next version of Radeon Pro, although we don't know how well it will fare. As was already mentioned, this is a driver feature, but yeah, I'd very much like to see it added in Catalyst.

One more thing that AMD could fix is the high power consumption in video acceleration mode.

http://www.techpowerup.com/reviews/ASUS/HD_7750/24.html

The 7750 overall consumes less than half the power of the GTS 450, yet the 450 still consumes less than the 7750 during Blu-ray playback.

Personally I don't really care about that, since I always disable every bit of video acceleration and browser/Flash acceleration, but many people may want to see this addressed. The higher-end models can produce some hefty wattage spikes even when just watching a YouTube video.

That's the Asus HD 7750 1GB, which is a little bit faster (100 MHz on the memory and 20 MHz on the core, with a really small voltage increase)... the difference in reality is tiny, we're talking about maybe 3W in Blu-ray mode (and there's maybe a 10% performance difference between the two; the PCBs and cards aren't built the same).

Don't forget that CCC automatically applies some heavy video post-processing settings by default for accelerated playback (mosquito noise reduction, denoise, etc.), whereas Nvidia's defaults apply fewer filters (at least for the older series).
I remember some tests comparing the default settings with the video filters disabled, and there was a small difference. Then again, the more filters you apply, the more watts it will consume. As for internet video, there are so many different factors involved: whether the browser is accelerated or not, whether the video (Flash) is accelerated or not, and of course which browser is used (Firefox has had some strange problems with Flash lately).

About adaptive V-sync... well, I don't know.
 
Can this be right for the 8800 series?
http://videocardz.com/34981/amd-radeon-hd-8870-and-hd-8850-specifiation-leaked
That's a pretty big jump in computing performance and texture fill rate (which seems odd), but not much more pixel fill rate. And now they have a base and a boost clock rate... is that right?

The die went from 212 mm² to 270 mm² for 600 million more transistors, going by that chart.
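For what it's worth, the headline rates in charts like that fall straight out of the unit counts and clocks, so here's a quick sketch of the arithmetic (the unit counts and clocks below are placeholders for illustration, not the leaked 8870 figures):

```python
# How the usual spec-sheet rates are derived from unit counts and clock.
# The counts and clocks here are made-up placeholders, NOT the leaked values.
def gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0   # 2 FLOPs (multiply+add) per shader per clock

def gtexels(tmus, clock_mhz):
    return tmus * clock_mhz / 1000.0          # 1 texel per TMU per clock

def gpixels(rops, clock_mhz):
    return rops * clock_mhz / 1000.0          # 1 pixel per ROP per clock

shaders, tmus, rops = 1536, 96, 32            # placeholder unit counts
base, boost = 900, 1000                       # placeholder base/boost clocks in MHz

print(f"compute: {gflops(shaders, boost):.0f} GFLOPS at boost")
print(f"texture: {gtexels(tmus, boost):.0f} GTexel/s at boost")
print(f"pixel:   {gpixels(rops, boost):.0f} GPixel/s at boost")
# Bumping shaders/TMUs while keeping the ROP count flat is exactly the pattern
# that inflates compute and texture rate but barely moves pixel fill.
```

So a big jump in the first two with pixel fill nearly flat mostly just tells you where the extra units went.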

Edit:
Ok, I'm just realizing something similar was posted a few days ago...
 
Anyway, Read2ch was apparently right last time with Tahiti (personally I don't remember it), so maybe there's some truth to this.
 
Higher power consumption is a result of shader post-processing (~higher quality). Since additional processing needs a non-zero amount of power, I have no idea how AMD could "fix" it.

I own both a 5850 CFX and a 570 SLI setup, I've done my fair share of GPU-assisted video decoding, and I can't see any quality difference whatsoever.

Why would TPU do Blu-ray tests if the cards weren't doing the same amount of work?

I believe the problem lies with the clocks used during GPU acceleration: they are too high. Nvidia keeps its clocks in a much lower state when working in the same mode.
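Just to put a rough number on the clock argument: dynamic power scales roughly with frequency times voltage squared, so a dedicated low decode clock state saves disproportionately. A tiny sketch with made-up clock/voltage figures (not actual UVD power-state values):

```python
# Back-of-envelope: dynamic power ~ C * f * V^2, so running video decode in a
# lower clocked (and lower volted) state costs much less than staying near
# 3D clocks. The frequencies and voltages below are invented for illustration,
# not real UVD/P-state values.
def relative_dynamic_power(freq_mhz, volts):
    return freq_mhz * volts ** 2

high = relative_dynamic_power(800, 1.10)   # hypothetical "near-3D" video clock state
low  = relative_dynamic_power(450, 0.95)   # hypothetical dedicated low decode state

print(f"low state needs ~{low / high:.0%} of the high state's dynamic power")
```

Which is basically what Nvidia seems to be doing by staying in a lower clock state during playback.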
 
GK104. The big chip comes later.

Nope, GK104 is the high end of the generation for nVidia; GK110 is the refresh generation (just like GF110 was).
AMD, on the other hand, did release Barts before Cayman, but that could be because Cayman was most likely delayed by a process change which Barts didn't go through.

But why is anyone saying "midrange comes first" anyway? Because some site decided to release specs which may or may not be anywhere near the real specs for "midrange" 8800's before 8900's?
 
GK104 is marketed as high end, but from a technical point of view it is not the high-end part of the Kepler family.
This time around you just cannot label things the way we used to. A refresh would imply that there is something to refresh, and to my knowledge there never was a GK100 (it was most certainly not released :D). Therefore GK110 is no refresh; GK114, for example, would be one.
 
GK104 is marketed as high end, but from a technical point of view it is not the high-end part of the Kepler family.
This time around you just cannot label things the way we used to. A refresh would imply that there is something to refresh, and to my knowledge there never was a GK100 (it was most certainly not released :D). Therefore GK110 is no refresh; GK114, for example, would be one.

GK104 is, from any point of view, the high end of the Kepler family until the Kepler refreshes arrive.
GK100 was certainly planned but obviously canned; there's no other explanation for the GK110 name of the forthcoming chip.
 
No it is not. You're thinking in old patterns that no longer apply to Nvidia.
Was it? Certainly? You cannot say that. Nvidia would have been stupid to plan a large, unmanufacturable die as the first part of their Kepler generation, especially after Fermi.

As Ailuros said somewhere, the x1x moniker comes from the additional capabilities of GK110's shader units, not from it being a refresh.
 
No it is not. You're thinking in old patterns that no longer apply to Nvidia.
Was it? Certainly? You cannot say that. Nvidia would have been stupid to plan a large, unmanufacturable die as the first part of their Kepler generation, especially after Fermi.

As Ailuros said somewhere, the x1x moniker comes from the additional capabilities of GK110's shader units, not from it being a refresh.

Yes, just like in GF11x... oh wait, GF10x already had them, excluding GF100 (or was it even the shader units? It might have been the TUs too).

Or are you suggesting all Kepler refreshes in the GTX 7xx series will be GPGPU monsters?
 
We don't know yet what the smaller chips will have as codenames.

Anyway, we're just having different opinions then, it happens. This has been discussed to death I presume.
 