Old 25-Mar-2012, 09:16   #276
Mianca
Member
 
Join Date: Aug 2010
Posts: 330
Default

Quote:
Originally Posted by silent_guy View Post
I love car metaphors because they enable getting ideas across in a very concise way. (Sorry for quoting your whole post, but it was the easiest way to read it without those annoying colors that were added by some bot.)
Yeah, sorry, that's Mr. Colorson. He's a dear friend of mine. Keeps lurking right above my text fields when I type posts. I told him not to interfere with my texts and leave my metaphors alone. Many times. But he's such a cheeky rascal. Always finds a way to pull my leg.

Luckily, he isn't half as bad as Mr. Smiliespam.

Mianca is offline   Reply With Quote
Old 25-Mar-2012, 10:24   #277
CarstenS
Senior Member
 
Join Date: May 2002
Location: Germany
Posts: 2,973
Send a message via ICQ to CarstenS
Default

Quote:
Originally Posted by jimbo75 View Post
Hexus and Anandtech had samples over 1100 MHz in the games they saw. I haven't looked at the rest but I'll be pretty amazed to find they weren't performing over the "average". Tom's hit 1110 MHz in 2 games and a little below that in the rest - http://www.tomshardware.com/reviews/...rk,3161-3.html

I haven't seen any game below the "average" yet but I haven't looked that hard.

As for the rest, Ryan benched the boost at 1600p, where its performance gains are likely more limited. Of course the really interesting thing would have been to bench it at 1080p, seeing as that is where the card is being positioned as unbeatable, but you'll have to ask him why he didn't do that. I have my own theory of course.
You've got a point there - lower resolutions, when not running into a CPU limit big time, increase the sheer number of fps and thus oftentimes (at least in my experience) also increase power draw, thus heat, and thus affect the turbo behaviour of the GTX 680.

But then, there's also temperature as a factor, as Ryan mentioned. As long as your card stays below 70 (°C, I believe he meant), your turbo will hit higher rates than above that threshold. But that also implies that better cooling will help your GPU's speed, doesn't it? Now, if Nvidia had adopted a very aggressive cooling profile that turns up the fan noise below that temperature threshold, I could believe they did it just to eke out marginal wins in some reviews.
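Just to make explicit how I picture that behaviour, here's a toy model - only my interpretation, since the real GPU Boost algorithm is undisclosed; the clock bins are merely the base/boost/observed-max figures floating around in this thread:

Code:
# Toy model of a temperature-gated boost step. This is only an interpretation;
# the actual GPU Boost logic is undisclosed and also factors in power headroom.
BASE_MHZ, BOOST_MHZ, MAX_BOOST_MHZ = 1006, 1058, 1110   # GTX 680 base / typical boost / observed max
TEMP_THRESHOLD_C = 70

def boost_clock(temp_c, power_headroom_w):
    """Pick a clock bin: top bin only below the temperature threshold and with power to spare."""
    if power_headroom_w <= 0:
        return BASE_MHZ
    return MAX_BOOST_MHZ if temp_c < TEMP_THRESHOLD_C else BOOST_MHZ

print(boost_clock(65, 10))   # 1110 - cool card with spare budget boosts higher
print(boost_clock(78, 10))   # 1058 - warmer card falls back a bin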

Quote:
Originally Posted by Mianca View Post
What should they learn? Their design is still better.
If Pitcairn was a ~300mm² chip with some 6GHz memory at its disposal, it would win against GK104 in total performance, perf/mm² and perf/W.

If Pitcairn was a ~300mm² chip with some 6GHz memory at its disposal AND a power-deterministic auto-overclocking switch that keeps the chip right at the 200W mark under gaming load - it would just kick GK104 out of the stadium.
If I take Computerbase's ratings, GK104 is roughly 34 to 40 percent above Pitcairn, while its die size is 38.6% larger - it'd be a close call I think, but probably nothing like one blowing the other out of some sports arena. And that's assuming area can be translated into performance at a 1:1 ratio. As for power: extrapolating the HD 7870's budget (114 watts measured in BF: BC2, for example) by the same factor gives 1.386 x 114 ≈ 158 watts for a scaled Pitcairn - leaving roughly 16 watts of headroom before it matches the GTX 680.
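Spelled out as a quick sketch (Python; the inputs are just the figures mentioned above, and everything else is naive 1:1 linear scaling with area - which is of course the big assumption):

Code:
# Naive linear scaling of Pitcairn up to GK104's die size - illustrative only,
# assuming performance and power scale 1:1 with area (they never quite do).
gk104_die_mm2    = 294.0    # GK104
pitcairn_die_mm2 = 212.0    # Pitcairn
gk104_perf       = 1.37     # ~34-40% above Pitcairn per Computerbase, midpoint
pitcairn_watts   = 114.0    # HD 7870 in BF: Bad Company 2 (our measurement)
gk104_watts      = 174.0    # implied by the "~16 W of headroom above 158 W" figure

area_ratio   = gk104_die_mm2 / pitcairn_die_mm2   # ~1.39
scaled_perf  = 1.0 * area_ratio                   # ~1.39 vs. 1.37 -> a close call
scaled_watts = pitcairn_watts * area_ratio        # ~158 W
headroom     = gk104_watts - scaled_watts         # ~16 W left to match the GTX 680

print(f"{area_ratio:.3f}x area, scaled perf {scaled_perf:.2f} vs. {gk104_perf}, "
      f"scaled power {scaled_watts:.0f} W ({headroom:.0f} W headroom)")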

For example: how large would the memory controllers have to grow in order to get past AMD's binning at such speeds?
Quote:
Originally Posted by Mianca View Post
GCN is a very sound architecture - just as PowerTune is a very sound high-performance power management feature.
True.
__________________
English is not my native tongue. Before flaming please consider the possibility that I did not mean to say what you might have read from my posts.
Work| Recreation
Warning! This posting may contain unhealthy doses of gross humor, sarcastic remarks and exaggeration!
CarstenS is offline   Reply With Quote
Old 25-Mar-2012, 11:20   #278
Mianca
Member
 
Join Date: Aug 2010
Posts: 330
Default

Quote:
Originally Posted by CarstenS View Post
If I take Computerbase's ratings, GK104 is roughly 34 to 40 percent above Pitcairn, while its die size is 38.6% larger - it'd be a close call I think, but probably nothing like one blowing the other out of some sports arena.
It's all personal guesstimation, of course, but in order to evaluate the basic architectural performance efficiency, I basically compared GTX680 results minus 5-10% boost speed (i.e. at the 1 GHz base clock) to the results of a 1 GHz HD7870 with a 6000 MHz memory clock - and extrapolated from there.

With respect to power, it's not even close. According to W1zzard (whose power measurements are as good as they get - measured directly at the PCI-Express power connectors and the PCI-Express slot), the GTX680 draws 60% more power on average than the HD7870. So perf/W is nearly 30% better on average, without even making any adjustments.

As for scaling - just look at the perf/W and perf/mm² scaling of HD7770 vs. HD7870. Without all the compute-related stuff, GCN scales very well: as a matter of fact, HD7870 even has 6% better average perf/W than HD7770, while perf/mm² is only slightly worse (Pitcairn is about 72% bigger than Cape Verde - with HD7870 performing 66% faster than HD7770).
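For transparency, the ratio arithmetic behind those numbers as a sketch - the 1.25 performance ratio for the GTX680 is an assumption backed out of the ~30% perf/W figure above, while the die sizes are the published ones:

Code:
# Ratio arithmetic behind the perf/W and perf/mm2 statements above - a sketch only.
# GTX 680 vs. HD 7870 (W1zzard's average gaming power draw):
gtx680_power_rel = 1.60   # GTX 680 draws ~60% more power than the HD 7870
gtx680_perf_rel  = 1.25   # assumed performance ratio (1.60 / 1.25 ~ 1.28)
print(f"HD 7870 perf/W advantage: ~{gtx680_power_rel / gtx680_perf_rel - 1:.0%}")   # ~28%

# HD 7870 (Pitcairn, ~212 mm2) vs. HD 7770 (Cape Verde, ~123 mm2):
area_ratio = 212.0 / 123.0   # ~1.72 -> Pitcairn about 72% bigger
perf_ratio = 1.66            # HD 7870 ~66% faster than HD 7770
print(f"Pitcairn perf/mm2 relative to Cape Verde: {perf_ratio / area_ratio:.2f}x")   # ~0.96x, slightly worse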

So, yes. Maybe I exaggerated a bit. But I'd still bet on a gaming optimized 300mm² GCN GPU to consistently beat GK104 by a fair margin - even without AMD implementing all that Boost stuff @ stock settings.

The fact that we're even discussing this is an impressive testament to NVidia's very good work with GK104, though - and who knows: If they can keep improving efficiency like that, Maxwell will be VERY hard to beat.
Mianca is offline   Reply With Quote
Old 25-Mar-2012, 11:35   #279
CarstenS
Senior Member
 
Join Date: May 2002
Location: Germany
Posts: 2,973
Send a message via ICQ to CarstenS
Default

Sorry to disappoint, but "average" is just Crysis 2:
"Average: Crysis 2 at 1920x1200, Extreme profile, representing a typical gaming power draw. Average of all readings (12 per second) while the benchmark was rendering (no title/loading screen)." (From your link)

The numbers I mentioned are also taken directly from the slot plus the PSU connectors, so they're no more guesstimates than what you've linked. An average performance-per-watt figure that correlates performance in many games with the power draw of just one is more of a rough ballpark (albeit a very useful one!) than a number whose decimal points I'd trust for inter-architecture comparisons.
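To illustrate what I mean with invented placeholder numbers: averaging per-game perf/W is not the same thing as dividing a multi-game performance index by the wattage of one title.

Code:
# Two ways to build a perf/W index, with invented placeholder numbers -
# just to show why mixing many-game fps with one-game watts is only a ballpark.
games = {                      # game: (fps, watts) - placeholders, not measurements
    "Crysis 2":   (45.0, 170.0),
    "Metro 2033": (30.0, 185.0),
    "Skyrim":     (90.0, 140.0),
}

per_game_index = sum(fps / w for fps, w in games.values()) / len(games)

avg_fps        = sum(fps for fps, _ in games.values()) / len(games)
shortcut_index = avg_fps / games["Crysis 2"][1]   # all-game fps over Crysis-2-only watts

print(f"average of per-game perf/W:        {per_game_index:.3f}")
print(f"avg fps / single-game power draw:  {shortcut_index:.3f}")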

Here's a few games with watts and fps at exactly the same spot:
http://www.pcgameshardware.de/aid,87.../Test/?page=19
Unfortunately we've only had time to do the 680, 580 and 7970 here (for the 7970 we've also used the reviewer sample provided by AMD). Nevertheless, the results are way less advantageous than Nvidia would like you to think. The 680 is beating the 580 handily, though.
__________________
English is not my native tongue. Before flaming please consider the possibility that I did not mean to say what you might have read from my posts.
Work| Recreation
Warning! This posting may contain unhealthy doses of gross humor, sarcastic remarks and exaggeration!
CarstenS is offline   Reply With Quote
Old 25-Mar-2012, 12:15   #280
jimbo75
Senior Member
 
Join Date: Jan 2010
Posts: 1,211
Default

Using Anand's numbers for the 680 and 7870 power bench during OCCT...

They both idle at 112W; the 7870 draws 259W during OCCT and the 680 draws 333W. That's a delta of 221W vs. 147W - about 50% higher for the 680.
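The arithmetic, for anyone who wants to plug in other readings (these are total-system numbers, so PSU efficiency and CPU load differences are ignored):

Code:
# GPU-only power delta estimated from total-system readings (Anand's OCCT numbers above).
idle_w        = 112.0   # both systems idle at the same wall power
load_7870_w   = 259.0
load_gtx680_w = 333.0

delta_7870   = load_7870_w - idle_w      # 147 W
delta_gtx680 = load_gtx680_w - idle_w    # 221 W
print(f"GTX 680 delta is {delta_gtx680 / delta_7870 - 1:.0%} higher than HD 7870's")   # ~50%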

I think TPU's perf/watt numbers are off, however, due to Crysis 2 scoring higher fps on the 7870 compared to the 680 - which clearly isn't the case.

I'd guess the actual lead for the 7870 over the 680 in perf/watt is 10-15%, depending on the game. Some will be closer to 5% and some might be closer to 30%, but the average will be around 15%, I think.
jimbo75 is offline   Reply With Quote
Old 25-Mar-2012, 13:51   #281
STaR GaZeR
Registered
 
Join Date: Dec 2011
Posts: 5
Default

AMD has been doing it wrong since the HD 5000s, IMO. With each generation the chips used for the HDx8xx SKUs have been getting smaller and smaller. They're now at 212 mm², selling for $350. Pitcairn is an awesome chip, much better than Tahiti, but it's way too small. Cypress had awesome perf/W and perf/mm², and it was "big". Barts was the same, but not quite there at 252 mm². Pitcairn is king there too, but it's too small.

Make a ~300 mm² (or more) Pitcairn, sell it at non-retarded prices, and you'll have a winner.
STaR GaZeR is offline   Reply With Quote
Old 25-Mar-2012, 22:08   #282
EduardoS
Member
 
Join Date: Nov 2008
Posts: 131
Default

Quote:
Originally Posted by Mianca View Post
But I'd still bet on a gaming optimized 300mm² GCN GPU to consistently beat GK104 by a fair margin - even without AMD implementing all that Boost stuff @ stock settings.
Sure, you're right, but why compare a hypothetical GPU to a real GPU? I'm not sure the GTX680 is the perfect balance.

BTW, I don't think the HD7870 is the perfect balance either; the problem Tahiti has, Pitcairn has as well, just to a lesser degree - and the most interesting topic about GCN to me is still what's limiting its gaming performance.
EduardoS is offline   Reply With Quote
Old 26-Mar-2012, 04:19   #283
Ryan Smith
Member
 
Join Date: Mar 2010
Posts: 164
Default

Quote:
Originally Posted by silent_guy View Post
Actually, I would really like to hear that theory! Ryan probably too, since he's a reader here.
Sure thing. I had to pick something, so I picked the something that was the least likely to be CPU limited.
Ryan Smith is offline   Reply With Quote
Old 26-Mar-2012, 08:22   #284
Mianca
Member
 
Join Date: Aug 2010
Posts: 330
Default

Quote:
Originally Posted by STaR GaZeR View Post
AMD has been doing it wrong since the HD 5000s, IMO. With each generation the chips used for the HDx8xx SKUs have been getting smaller and smaller. They're now at 212 mm², selling for $350.
I think the main problem isn't that they didn't want to make a bigger high-midrange chip, but that average fab costs per mm² keep rising when comparing similar points of process maturity.

Wouldn't be surprised if a ~210mm² (28nm) Pitcairn chip actually costs about as much to make right now as a ~340mm² (40nm) Cypress chip cost back in Oct. 2009 - so the launch prices of the corresponding cards end up very similar even though the chips are of different sizes.
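As a rough illustration of that point - the wafer prices and the ~1.6x factor below are purely hypothetical placeholders, since real foundry pricing isn't public; only the die sizes are real:

Code:
# Illustrative cost-per-die comparison. Wafer prices are made-up placeholders;
# the point is the mechanics, not the absolute numbers.
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Common gross dies-per-wafer approximation (edge loss only, no defect yield)."""
    r = wafer_diameter_mm / 2.0
    return math.pi * r**2 / die_area_mm2 - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)

def cost_per_die(die_area_mm2, wafer_cost_usd):
    return wafer_cost_usd / gross_dies_per_wafer(die_area_mm2)

# Hypothetical: an early 28 nm wafer assumed ~1.6x the price of a mature 40 nm wafer.
print(f"~212 mm2 Pitcairn on a $5600 wafer: ${cost_per_die(212, 5600):.0f} per die")
print(f"~340 mm2 Cypress  on a $3500 wafer: ${cost_per_die(340, 3500):.0f} per die")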


That being said, a ~300mm² high-midrange chip @ 28nm should become far more financially feasible towards the end of this year. I think AMD is in a rather good position in that respect. The direct successor to GK104 will probably end up somewhere between 350-400mm² - so a 250-300mm² Pitcairn successor has more headroom for a relative increase in die size.

The question really is: how far are they willing to go within that range - and what are Nvidia's plans? A slightly over ~250mm² Pitcairn successor vs. a slightly under ~400mm² GK104 successor won't really change the current performance gap in any significant way. A slightly under ~300mm² Pitcairn successor vs. a slightly over ~350mm² GK104 successor would get really interesting, though.


Given the current circumstances, I'd probably just try to extend the average life cycle of my compute chips, though - and make room for bigger gaming chips every 18-24 months. Tahiti is really good at compute tasks and FireGL cards take a lot of time to validate anyway - so why bother with another compute-heavy chip in 2012?

Going down that BIG COMPUTE - MEDIUM GAMING - BIG GAMING - MEDIUM GAMING - BIG COMPUTE (20nm) - MEDIUM GAMING (20nm) road, a ~250mm² Pitcairn successor PLUS a ~350mm² gaming-optimized high-end chip would be a really nice combo of medium and heavy punches. The next Tahiti-like multi-use chip would then be scheduled to precede BIG Maxwell sometime in late 2013/early 2014. Wishful thinking?
Mianca is offline   Reply With Quote
Old 26-Mar-2012, 08:42   #285
EduardoS
Member
 
Join Date: Nov 2008
Posts: 131
Default

Quote:
Originally Posted by Mianca View Post
I think the main problem isn't that they didn't want to make a bigger high-midrange chip, but that average fab costs per mm² keep rising when comparing similar points of process maturity.

Wouldn't be surprised if a ~210mm² (28nm) Pitcairn chip actually costs about as much to make right now as a ~340mm² (40nm) Cypress chip cost back in Oct. 2009 - so the launch prices of the corresponding cards end up very similar even though the chips are of different sizes.


That being said, a ~300mm² high-midrange chip @ 28nm should become far more financially feasible towards the end of this year. I think AMD is in a rather good position in that respect. The direct successor to GK104 will probably end up somewhere between 350-400mm² - so a 250-300mm² Pitcairn successor has more headroom for a relative increase in die size.

The question really is: how far are they willing to go within that range - and what are Nvidia's plans? A slightly over ~250mm² Pitcairn successor vs. a slightly under ~400mm² GK104 successor won't really change the current performance gap in any significant way. A slightly under ~300mm² Pitcairn successor vs. a slightly over ~350mm² GK104 successor would get really interesting, though.


Given the current circumstances, I'd probably just try to extend the average life cycle of my compute chips, though - and make room for bigger gaming chips every 18-24 months. Tahiti is really good at compute tasks and FireGL cards take a lot of time to validate anyway - so why bother with another compute-heavy chip in 2012?

Going down that BIG COMPUTE - MEDIUM GAMING - BIG GAMING - MEDIUM GAMING - BIG COMPUTE (20nm) - MEDIUM GAMING (20nm) road, a ~250mm² Pitcairn successor PLUS a ~350mm² gaming-optimized high-end chip would be a really nice combo of medium and heavy punches. The next Tahiti-like multi-use chip would then be scheduled to precede BIG Maxwell sometime in late 2013/early 2014. Wishful thinking?
You speak (I mean, write) as if whatever slows Tahiti down in games is too expensive to fix¹ and doesn't affect compute performance².

1) Maybe - if it were easy, I assume it would already be fixed - but I still believe this wasn't intentional.

2) I'm not sure this is true; there are still too few GPU compute applications to check... Anyway, knowing what limits gaming performance doesn't tell us how it affects other applications.
EduardoS is offline   Reply With Quote
Old 26-Mar-2012, 16:15   #286
lanek
Senior Member
 
Join Date: Mar 2012
Location: Switzerland
Posts: 1,183
Default

Quote:
Originally Posted by CarstenS View Post
Sorry to disappoint, but "average" is just Crysis 2:
"Average: Crysis 2 at 1920x1200, Extreme profile, representing a typical gaming power draw. Average of all readings (12 per second) while the benchmark was rendering (no title/loading screen)." (From your link)

The numbers I mentioned are also taken directly from the slot plus the PSU connectors, so they're no more guesstimates than what you've linked. An average performance-per-watt figure that correlates performance in many games with the power draw of just one is more of a rough ballpark (albeit a very useful one!) than a number whose decimal points I'd trust for inter-architecture comparisons.

Here's a few games with watts and fps at exactly the same spot:
http://www.pcgameshardware.de/aid,87.../Test/?page=19
Unfortunately we've only had time to do the 680, 580 and 7970 here (for the 7970 we've also used the reviewer sample provided by AMD). Nevertheless, the results are way less advantageous than Nvidia would like you to think. The 680 is beating the 580 handily, though.
Really interesting.
lanek is offline   Reply With Quote
Old 27-Mar-2012, 10:36   #287
Mianca
Member
 
Join Date: Aug 2010
Posts: 330
Default

Quote:
Originally Posted by CarstenS View Post
Sorry to disappoint, but "average" is just Crysis 2
Point taken - but your results still aren't all that different to the numbers I used, so why should I be disappointed?

Your numbers give the GTX680 about 7% better perf/W than the HD7970; W1zzard says 4%.

What I'd be really interested in are some perf/W numbers for HD7870 based on your more refined test procedure - and maybe perf/W for HD7970 @ -20% Powertune settings.

If there's one thing I took away from the discussion in this thread, it's that PowerTune has received way too little review-love until now.


EDIT:

There's a rather interesting PowerTune test @behardware.com, btw.

So the HD7970 basically still has a ~10% guardband in its power budget that keeps PowerTune from actually throttling games - which also explains why some people reported that even their overclocked cards weren't noticeably throttled @ stock PowerTune settings. There's simply a relatively broad headroom to exploit.

Interesting things happen @ -20% power budget, though: once it actually kicks in, PowerTune seems to adapt nicely and with good granularity to different levels of stress. I'd love to see the corresponding clock rates and power draw readings over the benched period of time.

It looks like average performance decreases faster than average power consumption, though - i.e. average power efficiency goes down with (power-budget-based) clock throttling in that specific case. Maybe Tahiti's peak average perf/W is reached in a higher clock speed / power range?
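If someone does log the matching fps and watts, the efficiency check itself is trivial - the example readings below are invented placeholders, just to show the sign of the effect:

Code:
# Did throttling help or hurt efficiency? The readings are invented placeholders.
def perf_per_watt(fps, watts):
    return fps / watts

stock   = perf_per_watt(fps=60.0, watts=180.0)   # stock PowerTune
minus20 = perf_per_watt(fps=50.0, watts=160.0)   # -20% power budget

print(f"perf/W change at -20%: {minus20 / stock - 1:+.0%}")   # negative -> efficiency went down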

Last edited by Mianca; 31-Mar-2012 at 21:52.
Mianca is offline   Reply With Quote
Old 27-Mar-2012, 11:59   #288
sheepdogexpress
Junior Member
 
Join Date: Mar 2012
Posts: 65
Default

Quote:
Originally Posted by CarstenS View Post
Sorry to disappoint, but "average" is just Crysis 2:
"Average: Crysis 2 at 1920x1200, Extreme profile, representing a typical gaming power draw. Average of all readings (12 per second) while the benchmark was rendering (no title/loading screen)." (From your link)

The numbers I mentioned are also taken directly from the slot plus the PSU connectors, so they're no more guesstimates than what you've linked. An average performance-per-watt figure that correlates performance in many games with the power draw of just one is more of a rough ballpark (albeit a very useful one!) than a number whose decimal points I'd trust for inter-architecture comparisons.

Here's a few games with watts and fps at exactly the same spot:
http://www.pcgameshardware.de/aid,87.../Test/?page=19
Unfortunately we've only had time to do the 680, 580 and 7970 here (for the 7970 we've also used the reviewer sample provided by AMD). Nevertheless, the results are way less advantageous than Nvidia would like you to think. The 680 is beating the 580 handily, though.
Is it just me, or is Metro 2033 the only game where the GTX 680 loses in performance per watt? Out of 9 games, that's pretty good if you ask me.
sheepdogexpress is offline   Reply With Quote
Old 27-Mar-2012, 12:32   #289
lanek
Senior Member
 
Join Date: Mar 2012
Location: Switzerland
Posts: 1,183
Default

Quote:
Originally Posted by sheepdogexpress View Post
Is it just me, or is Metro 2033 the only game where the GTX 680 loses in performance per watt? Out of 9 games, that's pretty good if you ask me.
I think the point was just to show how close they are in power draw, and how far from the figures the specs give.

Performance is subject to change depending on settings, resolution and the benchmark scene. A simple example with BF3: the 7970 is faster without FXAA, but its fps drop by 50% with it, while the loss is only half that for the 680. It also depends on where in the game you test.

But again, if it wins, the difference is not big - thanks to Metro, ME2... (SC2 is a strange case, to say no more.)

Last edited by lanek; 27-Mar-2012 at 12:39.
lanek is offline   Reply With Quote
Old 28-Mar-2012, 06:23   #290
Mianca
Member
 
Join Date: Aug 2010
Posts: 330
Default

Quote:
Originally Posted by lanek View Post
I think the point was just to show how close they are in power draw, and how far from the figures the specs give.
Yeah, HD7970 is specced according to its max power draw in games (see my earlier post), GTX680 is specced according to its typical power target in games.

HD7970 is absolutely clock deterministic with hugely variable power draw in games (the games tested by CarstenS show a range from 139W to 182W - that's about 30% fluctuation).

GTX680 is relatively power deterministic with variable clock rates within a certain range - resulting in way less fluctuation in power draw (the games tested by CarstenS show a range from 156W to 174W - that's a fluctuation of under 12%).

A lot of review sites just measure peak power consumption - which doesn't do justice to a card whose power draw fluctuates a lot.

Also, note that the numbers posted by CarstenS are averages - so they only capture the fluctuation of average power draw across different games. Add the fluctuation of power draw within each game on top of that, and the range becomes even bigger.
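For reference, this is how I derived the two fluctuation figures from CarstenS's per-game averages - only the lowest and highest readings from his table are used:

Code:
# Spread of per-game average power draw, relative to the lowest reading.
def spread(readings):
    return max(readings) / min(readings) - 1

hd7970_w = [139, 182]   # lowest / highest per-game average (W), HD 7970
gtx680_w = [156, 174]   # lowest / highest per-game average (W), GTX 680

print(f"HD 7970: ~{spread(hd7970_w):.0%} fluctuation")   # ~31%
print(f"GTX 680: ~{spread(gtx680_w):.0%} fluctuation")   # ~12%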

Last edited by Mianca; 28-Mar-2012 at 06:35.
Mianca is offline   Reply With Quote
Old 29-Mar-2012, 18:45   #291
xEx
Junior Member
 
Join Date: Feb 2012
Posts: 15
Default

Quote:
Originally Posted by Ryan Smith View Post
Sure thing. I had to pick something, so I picked the something that was the least likely to be CPU limited.
Then I'd like to ask why you used the old drivers that came with the HD 79xx. I don't know whether you re-ran the tests or just reused the old numbers; in any case I'm really wondering why you used numbers that don't represent the current performance of the Radeons... And ironically, those who used the newest drivers turned off the driver optimizations (Catalyst AI), which are the same kind of optimizations Nvidia uses - except that Nvidia doesn't let you turn them off.

I don't want to start a fight or be a fanboy of anything; I just want to know why you (like many others) made that choice.
xEx is offline   Reply With Quote
Old 29-Mar-2012, 18:47   #292
xEx
Junior Member
 
Join Date: Feb 2012
Posts: 15
Default

Not being able to edit my own post really **** lol
xEx is offline   Reply With Quote
Old 06-Jun-2012, 12:38   #293
AnarchX
Senior Member
 
Join Date: Apr 2007
Posts: 1,505
Default

Sea Islands GPUs?
Quote:
AMD6600.1 = "MARS (6600)"
AMD6601.1 = "MARS (6601)"
AMD6602.1 = "MARS (6602)"
AMD6603.1 = "MARS (6603)"
AMD6606.1 = "MARS (6606)"
AMD6607.1 = "MARS (6607)"
AMD6620.1 = "MARS (6620)"
AMD6621.1 = "MARS (6621)"
AMD6623.1 = "MARS (6623)"
AMD6610.1 = "OLAND (6610)"
AMD6611.1 = "OLAND (6611)"
AMD6631.1 = "OLAND (6631)"
AMD682B.1 = "VENUS LE"
AMD6823.4 = "VENUS PRO"
AMD6821.1 = "VENUS XT"
AMD6820.2 = "VENUS XTX"
http://forums.guru3d.com/showpost.ph...5&postcount=14
AnarchX is offline   Reply With Quote
Old 06-Jun-2012, 13:19   #294
UniversalTruth
Former Member
 
Join Date: Sep 2010
Posts: 1,529
Default

Likely the mobile chips' codenames.
UniversalTruth is offline   Reply With Quote
Old 06-Jun-2012, 14:13   #295
Kaotik
Drunk Member
 
Join Date: Apr 2003
Posts: 5,415
Send a message via ICQ to Kaotik
Default

Quote:
Originally Posted by UniversalTruth View Post
Likely the mobile chips codenames.
Mars & Venus are planets, Oland is an island (in a sea, too), so a mix of both, I guess.
__________________
I'm nothing but a shattered soul...
Been ravaged by the chaotic beauty...
Ruined by the unreal temptations...
I was betrayed by my own beliefs...
Kaotik is offline   Reply With Quote
Old 06-Jun-2012, 18:27   #296
eastmen
Senior Member
 
Join Date: Mar 2008
Posts: 6,299
Default

Any word on whether this is hitting this year?
eastmen is offline   Reply With Quote
Old 10-Jun-2012, 09:12   #297
rSkip
Registered
 
Join Date: Jan 2012
Location: Shanghai
Posts: 5
Default

Quote:
Originally Posted by Kaotik View Post
Mars & Venus are planets, Oland is island (in a sea, too), so mix of both I guess
They are also islands in Canada.
It seems that Venus is just a new name for Cape Verde (Mobile) and Mars & Oland are new chips.
rSkip is offline   Reply With Quote
Old 14-Jun-2012, 08:02   #298
Kaotik
Drunk Member
 
Join Date: Apr 2003
Posts: 5,415
Send a message via ICQ to Kaotik
Default

Quote:
Originally Posted by rSkip View Post
They are also islands in Canada.
It seems that Venus is just a new name for Cape Verde (Mobile) and Mars & Oland are new chips.
Does Canada have an island named Sun too?
From the 9.00 betas:
Quote:
AMD6823.2 = "AMD Radeon HD 8800M"
AMD6823.3 = "AMD Radeon HD 8800M"
AMD682B.2 = "AMD Radeon HD 8800M"
AMD6823.4 = "AMD Radeon HD 8800M"
AMD6821.2 = "AMD Radeon HD 8800M"
AMD6820.2 = "AMD Radeon HD 8800M"
AMD682B.3 = "AMD Radeon HD 8800M"
AMD6823.5 = "AMD Radeon HD 8800M"
AMD6821.3 = "AMD Radeon HD 8800M"
AMD6820.3 = "AMD Radeon HD 8800M"
AMD682B.4 = "AMD Radeon HD 8800M"
AMD6823.6 = "AMD Radeon HD 8800M"
AMD6821.4 = "AMD Radeon HD 8800M"
AMD6820.4 = "AMD Radeon HD 8800M"
AMD6820.5 = "AMD Radeon HD 8800M"
AMD6821.5 = "AMD Radeon HD 8800M"
AMD6821.6 = "AMD Radeon HD 8800M"
AMD6820.6 = "AMD Radeon HD 8800M"
AMD6823.7 = "AMD Radeon HD 8800M Series"
AMD6820.7 = "AMD Radeon HD 8800M Series"
AMD6821.7 = "AMD Radeon HD 8800M Series"
AMD682B.5 = "AMD Radeon HD 8800M Series"
AMD6823.1 = "AMD Radeon HD 8800M"
AMD682B.1 = "AMD Radeon HD 8800M"
AMD6821.1 = "AMD Radeon HD 8800M"
AMD6820.1 = "AMD Radeon HD 8800M"
AMD6600.1 = "MARS (6600)"
AMD6601.1 = "MARS (6601)"
AMD6602.1 = "MARS (6602)"
AMD6603.1 = "MARS (6603)"
AMD6606.1 = "MARS (6606)"
AMD6607.1 = "MARS (6607)"
AMD6620.1 = "MARS (6620)"
AMD6621.1 = "MARS (6621)"
AMD6623.1 = "MARS (6623)"
AMD6610.1 = "OLAND (6610)"
AMD6611.1 = "OLAND (6611)"
AMD6631.1 = "OLAND (6631)"
AMD6660.1 = "SUN (6660)"
AMD6663.1 = "SUN (6663)"
AMD6667.1 = "SUN (6667)"
AMD666F.1 = "SUN (666F)"
+ apparently all the previously mentioned ones, too
__________________
I'm nothing but a shattered soul...
Been ravaged by the chaotic beauty...
Ruined by the unreal temptations...
I was betrayed by my own beliefs...
Kaotik is offline   Reply With Quote
Old 14-Jun-2012, 08:09   #299
lanek
Senior Member
 
Join Date: Mar 2012
Location: Switzerland
Posts: 1,183
Default

8800M... OK, this time I think we can say these are mobile GPUs (they all share the same ID ranges: 66xx and 68xx).

Last edited by lanek; 14-Jun-2012 at 08:15.
lanek is offline   Reply With Quote
Old 14-Jun-2012, 13:11   #300
AnarchX
Senior Member
 
Join Date: Apr 2007
Posts: 1,505
Default

The AMD682x device IDs map to both:
  • Venus
  • HD 8800M

So we are probably looking at a successor to Cape Verde, which was the basis of the HD 7800M/HD 7700M.
AnarchX is offline   Reply With Quote
