AMD readies a new flagship desktop processor

Beyond3D News
AMD's current high-end offering in the desktop space, the Athlon 64 X2 6000+ (Windsor core, 90nm, 3GHz, 2MB L2 cache), is expected to hand over its flagship title to a new variant of the Athlon 64 X2 family, the Athlon X2 6400+. This new part is tasked with taking on Intel's new high-end dual-core part, the Core 2 Duo E6850.

Read the full news item
 

It's nice to see AMD still trying, but a 200MHz speed bump isn't going to do much against Intel at the moment. It might just push it past the E6600, but I don't see it seriously challenging the E6750, never mind the E6850.

Now if they could get it running at 3.8GHz, we might have some serious competition! For a couple of months until Penryn arrives, anyway!
 
Heh, interesting to see AMD playing the MHz game...

I'd like to say they should learn from their competition and run away from an MHz focus -- but I don't really think they have any other option right now. Unless they've got a whole new architecture sitting somewhere in a corner that they are being REALLY quiet about...
 
Repeat of the current AMD/ATI theme: too little, too late.
 
I wonder why they're even bothering...
Rather disappointing to see them keep pushing 90nm as well... It would be nice to see them transition to 65nm entirely.
 
I wonder why they're even bothering...
Rather disappointing to see them keep pushing 90nm as well... It would be nice to see them transition to 65nm entirely.

I agree, it's amazing that AMD hasn't done this, since they are always promoting how efficient they are.

A proper transition to 65nm and we could see a very efficient Athlon 64 X2 with 2MB of L2 cache hitting 3.8GHz easily while keeping thermals in check.
 
A brand-new (to them) lithography process isn't going to instantly net them another 25%+ in clock headroom, especially if they're simultaneously adding millions more transistors for cache. I'm not saying they couldn't get there, but it wouldn't just pop out.

Moving to a fully new process requires the entire chip to be rerouted and re-laid out. A 90nm -> 65nm change is not a simple process shrink like a 90nm -> 80nm node change would be.
 
It's a sign of just how hard process transitions are getting for AMD.

If Phenom is any later, AMD's top-clocking chips will be two process nodes behind the top-clocked Penryn derivatives.

65nm seems subject to pretty significant process variation, and the top clocks are proving especially hard to reach on anything but the very mature 90nm.
 
If Phenom is any later, AMD's top-clocking chips will be two process nodes behind the top-clocked Penryn derivatives.

You know, I didn't even think of it that way until you mentioned it... Intel is already providing full-speed samples of Penryn quad-cores at 45nm, and AMD hasn't even been able to get a working 65nm die as a tech demo.

How the hell can you even try to be price, performance, OR power competitive with someone who's two full process nodes ahead of you? They can make faster, more complex, and more power-efficient processors and still spend less on the silicon to build them.
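
To put a rough number on the silicon-cost side of that, here is a minimal back-of-envelope sketch. It assumes an idealised ~0.5x area scaling per full node and a made-up 183 mm^2 dual-core die at 90nm, and ignores yield, wafer edge loss and per-wafer cost differences, so treat it purely as an illustration of why two nodes of lead translates into roughly a quarter of the die area:

Code:
# Back-of-envelope only: assumes an idealised ~0.5x area shrink per full node
# and a hypothetical 183 mm^2 dual-core die at 90nm; ignores yield, scribe
# lines, wafer edge loss and per-wafer cost differences.
import math

WAFER_DIAMETER_MM = 300.0        # assumed 300mm wafers
AREA_SCALE_PER_NODE = 0.5        # assumed ideal full-node area scaling

def gross_dies_per_wafer(die_area_mm2):
    """Crude gross die count: wafer area divided by die area."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2.0) ** 2
    return wafer_area / die_area_mm2

base_area = 183.0                # hypothetical 90nm dual-core die area in mm^2
for nodes_ahead, node in enumerate(["90nm", "65nm", "45nm"]):
    area = base_area * AREA_SCALE_PER_NODE ** nodes_ahead
    print("%s: ~%.0f mm^2/die, ~%.0f dies per wafer"
          % (node, area, gross_dies_per_wafer(area)))

# Illustrative result: the 45nm version needs ~1/4 the area of the 90nm one,
# i.e. roughly 4x the candidate dies from the same wafer.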
 
Nitpick: language

News item said:
<...> the chip will run at 3.2GHz and support 2MB of L2 cache <...>
No. It will have.
If it supported this and that, it would mean that it could be coupled with it, optionally or variably. L2 cache support of up to 8MB is something one can validly claim of the "Argon" Athlon core of yore.
We're talking about a fully assembled product here though, not about one element that can be coupled with others (making "support" a valid concern). The cache and the core are elements of the same die even, so such configurations (as with Slot A Athlons) can't even exist.

This misuse of support is not new. I figure it originated from the use of "sports" (as a substitute for "has") plus a little carelessness. Taiwanese mainboard manufacturers seem to have embraced it fully now, as in an ATX mainboard "supporting" five PCI slots, while they are actually soldered onto the board, and what the board does support is not the five slots, but five PCI cards.
 
Or a language issue, given Farid's not a native English speaker. Sorry it passed the edit phase and made you spend all those key presses letting us know :p
 
That's cool, my key-presses are cheap and abundant!
It's a somewhat widespread phenomenon, and such things can easily slip through the cracks. I'm just happy that you agree that it's not correctly used here. I might have used fewer key-presses if I had known that :p
 
Why, they're running a business. They won't just sit back and watch the market move on (despite the less successful period as of late).

Certainly. However, I have to question how much revenue they stand to generate from a part which will most assuredly ship in very small volumes, and with very low margins.
 
According to an analysis done here, it is because 65nm is more leaky than 90nm, particularly with their SOI process. It certainly seems to make sense.

Copied for your convenience, the gist of the article is that:

"# the power consumed by various AMD chips was all over the place, with the lower-end chips chewing up much more power in idle than the faster ones

# the degree of variation was even worse with 65nm chips than with 90s; the high-end 65s are better than the high-end 90s, but the low-end 65s are worse than the low-end 90s."


Which led to this conclusion:

""It looks to me like AMD splits 65nm parts into three buckets: let's call them low, medium, and high leakage.

1. The low leakage parts are used for the 4800+ and 5000+ products. They are lower leakage than AMD's best 90nm parts. This is good news. They draw about 8% less current at 1.1V, and about 23-28% less current at 1.35V.

2. The medium leakage parts are used for the 4000+ and 4400+ products. They are about as high in leakage as AMD's leakiest parts on 90nm, and definitely leakier than their 90nm median parts. The 4000+ part, for example, draws more current at 1.1V than AMD's downbinned 3800+ part on 90nm. At 1.325V, they are drawing more current than AMD's high-end 90nm parts at 1.35V. This is certainly not good news, and suggests that the median leakage of AMD's 65nm process is worse than at 90nm.

3. The high leakage parts are downbinned to the 3600+ chip. Although this part has been removed from the current lineup, it's not clear whether they are still producing these and selling them as 4000+ parts, or whether their process has improved. At any rate, these parts are insanely leaky. At 1.1V, they are drawing almost 50% more current than AMD's worst 90nm part. And it's a good thing AMD restricted the voltage to 1.3V, because even at this voltage, the leakage towers over the entire 90nm product line.

I think these results are pretty interesting, and may explain why AMD has not been able to ramp 65nm. The leakage is killing them, and only their lowest leaking parts are able to hit 2.6GHz at 1.35V, and still maintain a reasonable power envelope.
"

Seems like an interesting assessment.
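
As a reading aid only, here is a minimal sketch of the three-bucket sorting that analysis describes. The idle-current thresholds and the sample values are invented for illustration; only the low/medium/high structure and the SKU mapping come from the quoted text:

Code:
# Illustration of the binning idea from the quoted analysis. The numeric
# thresholds below are hypothetical; only the three buckets and the SKUs
# they map to are taken from the post above.
def leakage_bucket(idle_current_amps):
    """Classify a die by idle current measured at a fixed voltage (e.g. 1.1V)."""
    if idle_current_amps < 8.0:      # assumed limit for the low-leakage bucket
        return "low (-> 4800+/5000+ class parts)"
    if idle_current_amps < 12.0:     # assumed limit for the medium bucket
        return "medium (-> 4000+/4400+ class parts)"
    return "high (-> downbinned 3600+ class parts)"

# A few made-up dies straight off the line:
for die, amps in [("die A", 6.5), ("die B", 10.2), ("die C", 14.8)]:
    print(die, "->", leakage_bucket(amps))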
 
That's cool, my key-presses are cheap and abundant!

You actually mean free and abundant; there is no cost to you in the physical action of pressing a key.

If you still claim they are not free, can you quantify why they fall into the cheap bracket?
 
There's a physical energy cost to pay where Rolf has to use motor function to engage the muscle and press (and release) the key, and more than likely also move his eyeballs to determine the correct visual response. At the very least.

Maybe we can buy him some sweets to replace the energy spent :cool:
 
You actually mean free and abundant; there is no cost to you in the physical action of pressing a key.

If you still claim they are not free, can you quantify why they fall into the cheap bracket?
The cost is the time he could have spent doing something else /economist.
 
I called AMD today to find out if there was anything they could do about the thermal sensor on my Brisbane going bananas. They blew me off, and said that there were absolutely no issues at all with Brisbane temperature reporting. It looks like by the time they come out with a new desktop processor, I might not even care about it. If my CPU ends up in flames because I have no idea how hot it is running (right now it is sitting at 16C - in a 74F ambient temperature room!), it is going to suck. I've been annoyed by this issue for months; abit blames AMD (with good reason, because it is happening on ASUS motherboards, Biostar MBs, etc.) and AMD blames the motherboard. I updated the BIOS again today with the new one, and no dice. :devilish:
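
For reference, a 74F room is about 23C, so that 16C reading sits several degrees below ambient, which an air-cooled CPU simply can't do; a quick sketch of the arithmetic:

Code:
# Sanity check on the numbers in the post above: an air-cooled CPU core
# cannot sit below ambient, so a 16C reading in a 74F room points at a
# broken or miscalibrated sensor rather than a cool chip.
def fahrenheit_to_celsius(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

ambient_c = fahrenheit_to_celsius(74.0)   # ~23.3C
reported_c = 16.0
print("Ambient ~%.1fC, sensor reports %.1fC (%.1fC below room temperature)"
      % (ambient_c, reported_c, ambient_c - reported_c))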
 