AMD Radeon HD 6000M Series Laptop GPUs Launched

6400M - 160 SPs/8 TMUs/16:4 ROPs - interesting, but Jesus, it's on a 64-bit bus... (available with either DDR3 or GDDR5).
A 64-bit bus should be "enough" if that's GDDR5 - for comparison, Barts has 7 times the SIMDs and 8 times the ROPs, but only 4 times the bandwidth.
In theory it shouldn't really require a whole lot more bandwidth than Cedar, since it's got the same number of TMUs and ROPs; "only" the ALUs (which don't really need memory bandwidth) have increased.
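For a rough sanity check of that bandwidth comparison, here's the arithmetic - a minimal sketch assuming illustrative memory clocks (actual speeds vary by SKU):

```python
# Peak-bandwidth arithmetic for the Caicos-vs-Barts comparison above.
# Memory clocks are assumed/illustrative, not official specs.

def bandwidth_gbs(bus_bits, mem_mhz, transfers_per_clock):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * mem_mhz * transfers_per_clock / 1000

caicos_gddr5 = bandwidth_gbs(64, 800, 4)   # 64-bit GDDR5 @ ~800 MHz -> 25.6 GB/s
caicos_ddr3  = bandwidth_gbs(64, 900, 2)   # 64-bit DDR3  @ ~900 MHz -> 14.4 GB/s
barts        = bandwidth_gbs(256, 900, 4)  # 256-bit GDDR5 @ ~900 MHz -> 115.2 GB/s

print(f"Caicos GDDR5: {caicos_gddr5:.1f} GB/s (DDR3: {caicos_ddr3:.1f} GB/s)")
print(f"Barts: {barts:.1f} GB/s -> {barts / caicos_gddr5:.1f}x Caicos")
```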

It's a performance gap AMD should've filled a while back in the 3xxx series, or at least had a 160 SP part as the lowest-end desktop part when the 5xxx series arrived. I'm very interested to see how it does in its GDDR5 form.
I agree Cedar felt a bit anemic - even more so since it was a tiny bit slower than the HD 4550 (at the same clocks, due to interpolation being handled in the ALUs, so it really could have used more ALUs).
That said, look at the transistor count of Caicos compared to Cedar - it looks like it's got a faster frontend too. In any case, unlike Cedar, it should be a decent step up from Intel HD 3000 graphics.

And yay for the return of 256 bit memory buses to AMD's high end mobile graphics, but only on the 6900M :(
Well, unlike some other company AMD doesn't make 100W TDP mobile chips, hence there's not really that much space for different 256bit chips...
 
Well, unlike some other company AMD doesn't make 100W TDP mobile chips, hence there's not really that much space for different 256bit chips...

This is from Notebookcheck:

The power consumption is specified with 100 Watt TDP including the MXM board and the 2 GB GDDR5. AMD usually specifies the TDP of the chip alone, therefore this value is not directly comparable.
 
I don't really see the point of the Radeon 6300M.
For me, the point is simple.
I know that AMD will support its drivers for a loooooong time.
With Intel, they have already said that they are no longer actively supporting the 4500MHD graphics.
So basically, Intel provides new drivers only for the last two generations. If I'm buying a laptop that I want to last 5 years, I can't have a product whose drivers stop being supported in 2-3 years.
 
This is from Notebookcheck:
Where did you see that? I see no mention of TDP for the HD6970M - only that it should be similar to the GTX480M. I've got some doubts about this though; I don't think it should be over 75W. That is, it _could_ be lower than the GTX485M's too, though the latter seems to be faster (makes sense: compared to their desktop brethren (HD 6850, GTX 460 1GB), both lose similar amounts of their frequencies, which would make them about as fast, but the GTX485M gets an additional SM).
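To put rough numbers on that desktop-to-mobile scaling - a back-of-the-envelope sketch using launch specs from memory, so treat the figures as assumptions:

```python
# Crude ALU-throughput proxy (shader units x shader clock) for the
# desktop-vs-mobile comparison above. Specs are assumed from memory.

parts = {
    # name: (shader units, shader clock in MHz)
    "HD 6850 (desktop)":     (960, 775),
    "HD 6970M (mobile)":     (960, 680),
    "GTX 460 1GB (desktop)": (336, 1350),
    "GTX 485M (mobile)":     (384, 1150),
}

def throughput(name):
    units, clock = parts[name]
    return units * clock

amd_kept = throughput("HD 6970M (mobile)") / throughput("HD 6850 (desktop)")
nv_kept  = throughput("GTX 485M (mobile)") / throughput("GTX 460 1GB (desktop)")
print(f"AMD keeps ~{amd_kept:.0%} of desktop throughput")                # ~88%
print(f"NV keeps ~{nv_kept:.0%} (the extra SM offsets the clock drop)")  # ~97%
```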
I find the GTX485M a strange chip though - why is this not a 500 series part? Does it mean there will be a GF114-based 500 series chip? Or, on the contrary, does it mean there won't be a GF114 because Nvidia now puts the full GF104 into mobile chips?
 
For me, the point is simple.
I know that AMD will support its drivers for a loooooong time.
With Intel, they have already said that they are no longer actively supporting the 4500MHD graphics.
It's simply a mistake to buy Intel IGPs for games. However, the drivers don't need to be updated for plain Windows duties and HD video. So I think an Intel IGP is fine for everything except games.

You really don't want to try to game on any IGP because they are slower than the slowest discrete boards. They also tend to have somewhat unstable performance because of how they have to share system memory.

The low-end discrete chips are particularly bleh because not only can they barely play even older games, but they also suck down more battery than an IGP. They usually come attached to mobile chipsets with an IGP on die, so you end up powering a disabled IGP too, I think.

So I think that if you want to game, start at the midrange discrete GPUs. If you aren't going to game be happy with the power savings and lower heat output of any IGP.
 
So I think that if you want to game, start at the midrange discrete GPUs. If you aren't going to game be happy with the power savings and lower heat output of any IGP.
I don't want to game. Well, okay, I do :D but the primary reason is that I know that Photoshop CS6 will probably work with an HD6300M, while it will not work with 4500MHD graphics. You can replace Photoshop with any other GPU-accelerated application - for example, Adobe Flash.
 
Adobe Flash's acceleration has caused BSODs with my Mobility 5870. Fun.

The hardware acceleration is a good point, but I'm still not sure what kind of gains you can expect from IGP-level hardware for that kind of thing. HD video, sure, but anything that uses the more general shader units is probably not going to be very fast.
 
A 64-bit bus should be "enough" if that's GDDR5
That's what I figure, but I have a feeling the GDDR5 version will be a rarity much like the Mobility Radeon HD 5750.

I agree Cedar felt a bit anemic - even more so since it was a tiny bit slower than the HD 4550 (at the same clocks, due to interpolation being handled in the ALUs, so it really could have used more ALUs). That said, look at the transistor count of Caicos compared to Cedar - it looks like it's got a faster frontend too. In any case, unlike Cedar, it should be a decent step up from Intel HD 3000 graphics.
I always felt there was a gap between the Radeon 2600 and 2900, and especially between the 36xx and 38xx series, that would've been better filled with a 160 or 200 SP part in place of a 120 SP GPU. It could've been re-issued in place of the 43/45xx as well, as opposed to developing RV710, which wasn't very necessary when there was the 780G, etc., to take care of the low end. The 2600 and 36xx were underwhelming products at an awkward point in the performance range.
 
Well, you know, for those low-end chips the main goals are 1) cheap and 2) all the features. On 65nm and 55nm, cheap equaled the nasty 2400 / 3450. The 2600 and 3650 were awful too, primarily because of the big, inefficient R600/RV670-era tech. The GeForce 8400, 8500, 8600, 9400 and 9500 weren't exactly great either, though.

Most of these cards are actually reminiscent of the GeForce FX 5200: a card with all the DX9 features, but you sure as hell didn't want to try to use them. Likewise, you don't want to try almost any DX10 game on any of these low-end modern GPUs. The 3450 that I had in a notebook could barely play KOTOR well.
 
Considering the 8600GT was a big product for the more mid-range GPU market when it launched, I think a 160 SP Radeon would've dominated it pretty well. I'm just saying that base part configuration could've been extended into DX10.1 and DX11. It's a nice performance point that probably could've garnered many consumers who wanted more than the low end without having to jump all the way up to the Radeon 29xx, 38xx or 46xx when they all came around.

I think the low-end GPU market is way too crowded at the moment, especially as Nvidia starts two "new" lines a year. I think simplifying the product lines would increase the incentive for laptop and graphics card makers to choose a specific product, increasing demand for that one GPU and therefore making up for the "lost" sales of having 3 or 4 products in a certain segment. That in turn would make the end product cheaper for the consumer.

For example (theoreticals here :p):

40 SP GPU - $40
80 SP GPU - $50
160 SP GPU - $70
400 SP GPU - $90

You could cut out the 40 and 80 SP lines of GPUs to increase sales of the 160 SP part, lowering its cost to, let's say, $55. That 160 SP GPU can be extended into IGPs and the "low end" of the dedicated graphics market, of which the IGP area is a massive segment, where the $55 160 SP part would have similar sales to the 40 SP part. Reviewers and consumers would fall in love with the high performance, and sales would be as good as the 40 and 80 SP parts combined; AMD would have a new winner on their hands, like the 780G was back in the day - very high performance for the low end. The $70 or $75 segment can be reserved for the actual graphics card version of the 160 SP part.

Yes, the guy who wants to only spend $40 may be left in the dust without his 40 SP part, but for $55 he could be getting a 160 SP part that will do more for him, especially when that 40 SP part might not have handled what he wanted to do very well in the first place. He might be enamored with the performance (assuming his needs were met), and a new loyal customer could be at hand. I'm not the best at economics and consumer trends here, but I think it's a viable part of the game Nvidia and AMD are always playing. Of course, things will change with Fusion as far as how AMD markets graphics cards.
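As a toy illustration of that consolidation argument (all prices and unit volumes below are invented purely for illustration):

```python
# Hypothetical revenue arithmetic for consolidating three low-end SKUs
# into one. Every number here is made up for illustration.

# Before: three SKUs, each as (price in $, volume in millions of units).
before = {"40 SP": (40, 1.0), "80 SP": (50, 0.8), "160 SP": (70, 0.5)}
rev_before = sum(price * vol for price, vol in before.values())  # $115.0M

# After: one 160 SP SKU at $55 absorbs the combined volume.
vol_after = sum(vol for _, vol in before.values())               # 2.3M units
rev_after = 55 * vol_after                                       # $126.5M

print(f"Before: ${rev_before:.1f}M across 3 SKUs")
print(f"After:  ${rev_after:.1f}M on 1 SKU, with fewer dies to design and validate")
```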

The 780G and its successors are comparative dinosaurs that have been clearly surpassed by the 9400M and 320M from Nvidia, and by Sandy Bridge.
 
The 780G is just a 40 SP GPU. It's slower than a discrete 3450 - essentially it IS a 3450, but with considerably less core clock and bandwidth. It performs quite badly if you have an Athlon 64 installed instead of a Phenom-class CPU because of the nearly halved HT bandwidth, and that was a common configuration back then. By far the most exciting part of this IGP was the H.264 & VC-1 hardware support, but that was all that was really usefully better than, say, the 690G.

The thing is that the whole point of these low end things is cheapness because the OEMs know that the majority of their customers won't notice a pathetic slow little GPU but they will notice missing features and potentially lose a sale. The ability to use the features isn't really a concern because these customers don't know what they need and don't know that the performance is so pathetic as to be worthless. So basically all that matters is that the features are there and the cost is very low.

Another thing is that I don't know if a 160SP GPU at 55nm would have worked in some of the applications that the 40SP 34xx was used in. For example, I had a 12" subnote that used a discrete 3450. The heat there was already quite noticeable. A Radeon 3600 class chip may not have been feasible.
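On the halved-HT-bandwidth point, a rough calculation - the link widths and clocks are assumed typical values from memory, not verified specs:

```python
# Per-direction HyperTransport bandwidth for the Athlon 64 vs Phenom
# comparison above. Link configurations are assumed, not verified.

def ht_gbs(link_mhz, width_bits=16):
    """Per-direction HT bandwidth in GB/s (DDR: 2 transfers per clock)."""
    return link_mhz * 2 * width_bits / 8 / 1000

athlon64 = ht_gbs(1000)  # older 1 GHz HT link   -> 4.0 GB/s each way
phenom   = ht_gbs(1800)  # HT 3.0 link @ 1.8 GHz -> 7.2 GB/s each way

print(f"Athlon 64: {athlon64:.1f} GB/s, Phenom: {phenom:.1f} GB/s per direction")
print(f"-> the IGP sees ~{athlon64 / phenom:.0%} of the bandwidth on Athlon 64")
```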
 
The 780G is just a 40 SP GPU. It's slower than a discrete 3450 - essentially it IS a 3450, but with considerably less core clock and bandwidth. It performs quite badly if you have an Athlon 64 installed instead of a Phenom-class CPU because of the nearly halved HT bandwidth, and that was a common configuration back then. By far the most exciting part of this IGP was the H.264 & VC-1 hardware support, but that was all that was really usefully better than, say, the 690G.

The thing is that the whole point of these low end things is cheapness because the OEMs know that the majority of their customers won't notice a pathetic slow little GPU but they will notice missing features and potentially lose a sale. The ability to use the features isn't really a concern because these customers don't know what they need and don't know that the performance is so pathetic as to be worthless. So basically all that matters is that the features are there and the cost is very low.

Another thing is that I don't know if a 160SP GPU at 55nm would have worked in some of the applications that the 40SP 34xx was used in. For example, I had a 12" subnote that used a discrete 3450. The heat there was already quite noticeable. A Radeon 3600 class chip may not have been feasible.

Well, I meant that the 160 SP design/level of performance could be reused as its process was brought down. It could've begun life during the 2000 series with a 128-bit interface, and by now could've come down to 40nm with a 64-bit interface and DX11 capability added.

Look at the 29xx, 38xx, 46xx, and 55/56xx level of GPUs. Yes, much of it had to do with the memory interfaces and clock speeds of the chips, but as far as relative performance goes, we went from a graphics card using 200W to 110W to about 60W, and Redwood is a good deal more capable than R600. The new 65xxM series comes in at just below 30W TDP with the 6370M listed at 11W, and Caicos should be around 20W (depending on speed, etc.), which is the same as the old 2400 Pro on 65nm and the 34xx on 55nm, yet up to 6 times faster than either one. The 4550 on 55nm was 25W. But I guess there's no point in beating a dead horse. I would like to see the 6400M become the new staple for low-end dedicated graphics, but the GDDR5 version will be rare, just like with any 5xxx and other 6xxx parts that have GDDR5 as an option but are also available with DDR3 (ew!).
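Putting rough numbers on that progression - the TDPs are the figures cited above, while the equal-performance assumption is a loose simplification just to show the trend:

```python
# Perf-per-watt trend for the mid-range tier discussed above, assuming
# each generation delivers roughly the same performance (a loose
# simplification) at the TDPs cited in the post.

tiers = [
    # (part class, TDP in watts)
    ("HD 29xx class", 200),
    ("HD 38xx class", 110),
    ("HD 46xx/56xx class", 60),
]

base_name, base_tdp = tiers[0]
for name, tdp in tiers:
    gain = base_tdp / tdp  # perf/W gain at equal assumed performance
    print(f"{name}: {tdp}W -> {gain:.1f}x perf/W vs {base_name}")
```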

A new Sandy Bridge i5 + 6400M in a 14" form factor would make a nice replacement for my Vaio (i3 + 310M), though - not that I need it :p
 
I think HD 6400M / Caicos (well, sure, with a different name...) could be what gets used for Krishna/Wichita - that would be quite nice.
 
I think HD 6400M / Caicos (well, sure, with a different name...) could be what gets used for Krishna/Wichita - that would be quite nice.

Agreed. With only ~30% more transistors, die area and power consumption-per-clock would be noticeably lower than Ontario's/Zacate's IGP. That would allow for either lower power consumption or higher clocks, maybe even both.

Together with the doubled & improved Bobcat cores, I wouldn't be surprised if Krishna/Wichita turns out to be a much better product than Ontario/Zacate.
 
Agreed. With only ~30% more transistors, die area and power consumption-per-clock would be noticeably lower than Ontario's/Zacate's IGP. That would allow for either lower power consumption or higher clocks, maybe even both.
It's actually ~50% more transistors (a surprisingly high number indeed). Still, the die area will be smaller. I would expect similar power draw at similar clocks, however.

Together with the doubled & improved Bobcat cores, I wouldn't be surprised if Krishna/Wichita turns out to be a much better product than Ontario/Zacate.
For sure but that's an unfair comparison.
My speculation (wish list?) for Krishna looks like this:
- Caicos integrated graphics.
- Single-channel DDR3-1600 (or even faster if that becomes mainstream) support.
- Die size very similar to Zacate.
- maybe Turbo-like functionality for the CPU cores (maybe the GPU too). The 4-core version might not be able to reach the single-thread performance of Zacate otherwise in the same power envelope (and having versions with fewer cores but higher clocks is something everybody wants to avoid).
- no idea on CPU core improvements other than better SSE units. For that, it could have 128-bit instead of 64-bit units (just about the only area where Atom can be faster) and support newer SSE versions (potentially even with half-speed AVX support). I agree, though, that beefing up the SSE units goes a bit against the idea of the APU (a problem Atom doesn't have).
 