AMD Bulldozer Core Patent Diagrams

Charlie is trying to save face after his constant cocksucking of AMD wasn't reciprocated by getting blown himself.

I mean, really. All he had to do was read that old Real World Tech article and he'd have known that BD was gonna disappoint compared to SB. That simple. He resorted to printing unreliable rumors that flew in the face of established fact, or made up shit. I don't know which, though I'd lean toward the former. Given his constant misunderstandings of tech both on his site and here, it just seems like the kind of thing that he'd do.
 
The old K10 wouldn't scale well in clock-rate and power efficiency like a new architecture, despite process shrinking. A micro-architecture is usually developed to scale across about two full process nodes. After that a significant re-design is required, not only to provide clock-rate scaling but also to implement important new features, which often results in what is effectively a new architecture. Intel's roadmap from the last five years shows this very streamlined development cycle, with gradual evolution ever since Conroe/Merom, in steady lock-step with manufacturing process advancement. With SNB they even brought some long-bashed NetBurst paradigms back into action (OK, Nehalem got HT first). AMD, on the other hand, was far too reliant for far too long on "salted" K8 increments and bumpy process transitions.
 
Just dropping Thuban down to 32nm would have produced a better-performing, more power-efficient chip than what BD is.
 
So did they ignore the transistors in the caches, as Intel supposedly does as well?

If any of them ignored cache transistors, the transistor counts would be 3/4 what they are.

They need to ask AMD this question again and have them check their math, just in case AMD fired everybody who had the right answer.

The density figures for that are not good, and the scaling from 45nm is quite bad if true. It's way below other 32nm products from Intel and AMD.
 
Would have been much more competitive and cheaper than that piece of shit called BD.
Yeah, particularly in AVX and AES benchmarks. Then shrink it to 20nm, then 15nm, etc.

Really, the launch of BD revealed so many skilled chip architects across the internet forums, it's unbelievable how AMD couldn't simply afford a bigger R&D team and build an Intel killer... :rolleyes:
 
The module transistor count was given as being 213 million.
With four, that's 852M.
8 MiB L3 × 8 bits per byte × 6 transistors per bit ≈ 402M.

That's over 1.2B right there, and leaves nothing for anything else on the die. No L3 tags, no MC, no HT, nothing.
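
For what it's worth, here's that arithmetic as a quick Python sanity check; the 213M-per-module and 6T-per-bit figures are just the ones quoted above, not an official AMD breakdown:

```python
# Rough sanity check of the transistor budget quoted above.
# Figures are the ones from this thread (213M per module, 6T SRAM cells),
# not an official AMD breakdown.
MODULE_TRANSISTORS = 213_000_000      # per the quoted module count
MODULES = 4
L3_BYTES = 8 * 1024 * 1024            # 8 MiB shared L3
TRANSISTORS_PER_BIT = 6               # standard 6T SRAM cell

modules_total = MODULES * MODULE_TRANSISTORS           # ~852M
l3_data = L3_BYTES * 8 * TRANSISTORS_PER_BIT           # ~403M (data array only, no tags)
total = modules_total + l3_data

print(f"modules: {modules_total/1e6:.0f}M, L3 data: {l3_data/1e6:.0f}M, "
      f"sum: {total/1e9:.2f}B")
# -> sum ~1.25B, already above the ~1.2B figure, with nothing left over
#    for L3 tags, the memory controller, HyperTransport links, etc.
```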
 
Yeah, particularly in AVX and AES benchmarks. Then shrink it to 20nm, then 15nm, etc.

Really, the launch of BD revealed so many skilled chip architects across the internet forums, it's unbelievable how AMD couldn't simply afford a bigger R&D team and build an Intel killer... :rolleyes:

Yes, because AVX is widespread and used everywhere....
 
Yes, because AVX will be widespread and used everywhere at some point. You have to make the switch at some point, and the earlier the better if your competitor already has it.

It has sub-par AVX performance compared to SB and worse general performance than Phenom II.

No point in adding a feature if it degrades the chip as a whole.
 
But hey, in 4 years when your average app can actually scale half-decently over 8 cores it will be better than i5 so it's future-proof!
 
This forum needs sarcasm tags.

Though I was only half joking. I've seen tons of people saying that getting a BD is a great idea because it has tons of cores and is thus future-proof.
 
Yeah, particularly in AVX and AES benchmarks. Then shrink it to 20nm, then 15nm, etc.

Really, the launch of BD revealed so many skilled chip architects across the internet forums, it's unbelievable how AMD couldn't simply afford a bigger R&D team and build an Intel killer... :rolleyes:

Desktop Bulldozer gets owned by Phenom II. The new FX line is a line of turds.

Consumers don't have to be able to build a less shitty desktop processor than Bulldozer FX; they just have to be able to buy something better. And they can.
 
Wow, just wow. I'd expect such clueless posts on OCN or other benchmark wanking places, but here??

Worse than Phenom II? Don't stop there, say it's worse than a K6 too. :rolleyes:
 
Perhaps the process is especially bad; it would suck less if it ran a bit faster and was less power hungry. A year from now it will look better, with incremental progress on process and architecture, and a Windows 8 scheduler that is actually aware of cores and modules.
 
I think the scheduler tweaks are AMD skirting around the real truth of the CPU needing special attention due to bugs in the cache and such. I think the design is supposed to be less sensitive to scheduler problems; otherwise they would have used the logical/physical CPU designations that the OS already knows about from Intel HT.
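
To illustrate the module-awareness point (purely a toy sketch, not how the Windows scheduler is actually implemented): a module-aware policy would place threads on separate modules first, so the two cores that share a front end, L2 and FPU aren't both loaded while other modules sit idle. Something like:

```python
# Toy illustration of module-aware thread placement (not the real
# Windows 8 scheduler). Each Bulldozer module has two integer cores
# sharing a front end, L2 and FPU, so spreading threads across modules
# first avoids that contention at low thread counts.
MODULES = 4
CORES_PER_MODULE = 2

def assign_threads(num_threads):
    """Return (module, core) slots, filling one core per module
    before doubling up on any module."""
    slots = []
    for core in range(CORES_PER_MODULE):   # which core within a module
        for module in range(MODULES):      # walk all modules before reusing one
            slots.append((module, core))
    return slots[:num_threads]

# Four threads land on four different modules instead of two:
print(assign_threads(4))   # [(0, 0), (1, 0), (2, 0), (3, 0)]
print(assign_threads(6))   # only adds (0, 1) and (1, 1) once every module has a thread
```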

Wow, just wow. I'd expect such clueless posts on OCN or other benchmark wanking places, but here??

Worse than Phenom II? Don't stop there, say it's worse than a K6 too. :rolleyes:

It might be worse than Phenom II when you consider how inconsistent its game performance apparently is.
http://www.hardocp.com/article/2011/11/03/amd_fx8150_multigpu_gameplay_performance_review/2

The unstable frame rate looks really poor.
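
The "unstable" part is worth quantifying: average FPS can look fine while frame-to-frame variation ruins the experience. A quick sketch with made-up frame times (illustrative numbers only, not data from the HardOCP review):

```python
# Why an "unstable frame rate" matters even when average FPS looks OK.
# Frame times below are made-up illustrative numbers, not measured data.
from statistics import mean

def summarize(frame_times_ms):
    avg_fps = 1000 / mean(frame_times_ms)
    p99 = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]
    worst = max(frame_times_ms)
    return avg_fps, p99, worst

steady = [20] * 100              # constant 20 ms per frame -> 50 fps
spiky  = [15] * 90 + [65] * 10   # same 50 fps average, but with big stutters

for name, times in (("steady", steady), ("spiky", spiky)):
    fps, p99, worst = summarize(times)
    print(f"{name}: avg {fps:.0f} fps, 99th pct {p99} ms, worst {worst} ms")
# Both report 50 fps average, but the spiky run's 99th-percentile frame
# time is 65 ms -- that's the stutter the average hides.
```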
 