Saem said:
You did NOT comprehend what I'm saying, stop trying to claim that you are.
Yes, I *did* understand, as far as you actually bothered to explain your theories anyway. As you refused - and still refuse - to actually apply them to a practical example, I did that work for you. Sorry if I went off on a tangent in a different direction than you, but really, you gave me no other choice.
Now the next step is up to you. Do your homework, buddy; I won't accept any more sneering remarks from you that I did not understand unless you actually SHOW what it is you believe I did not understand!
SHOW EXACTLY how you think I am not understanding you. Unless you bother to do that, this discussion will go nowhere.
What you fail to understand is the level of abstraction I'm talking about. This doesn't allow you to even begin to comprehend the trend I'm talking about.
And still you fail to explain yourself.
It's easy to blow people off with big words like 'level of abstraction' and by telling people to 'do the necessary research' (which just shows you're either bad at explaining what you really mean, or plain lazy if you expect people to have to RESEARCH to find the point of your posts), but just as a reminder here, buddy, the reason we have discussion boards is actually to DISCUSS things.
Sitting there with your nose up in the air, saying in a snotty manner that people don't comprehend, is neither polite nor particularly constructive. It doesn't further any kind of discussion; it's just annoying and irritating. I kindly suggest you try harder to explain yourself in the future...
Additionally, the sample size is rather large, over a long period of time which has encompassed large shifts in computing philosophies. This isn't MY observation pulled out of my ass; this is something which many people in the know (especially engineers in the field) will either say themselves or agree with.
Your trend not being pulled out of your ass (or so you say) notwithstanding, what I want to know is how you intend to actually APPLY this trend to anything SPECIFIC. It's all well and good hearing you say brainy microprocessors tend to miss their target; even if we take you at face value and assume this to be the truth, then SO WHAT?! Where does this take us in the context of our discussion?!
Unless you're talking about a specific CPU, your trend is meaningless, pointless AND irrelevant. It's just words without value unless you actually go a step further and start being specific. Of course, unless you're talking about an existing CPU, you'd just be speculating!
Again, taking Cell as an example, neither you nor anyone else here knows if it will end up being a "brainy" architecture or not, and even if it IS, none of us know whether it is in danger of missing its target because of it being "brainy", whatever that target may be! As far as I know, there hasn't even been a tapeout of the completed chip yet, much less first silicon, so how could anyone discuss any possible problems with its design with any reasonable level of confidence? (This rules out Deadmeat's speculation, of course. *snicker*)
Cell will also not be required to scale up in clock speed as time passes, unlike a PC (or almost any other) CPU, which would be expected to.
WHAT?!?!?!?! That's crazy. Intel and AMD don't blow so much money on design just to get the CPU out at a speed and then go, "Yay gravy!", when it scales.
The difference is, of course, that Cell will be used in fixed hardware. It will run at X MHz when it launches, and it will STILL run at X MHz when it is taken out of production a number of years later. Heck, the Cell architecture even incorporates a mechanism to insert dummy instructions to ensure code runs at the 'designed' speed when executed on faster hardware than the software was programmed for...
Whatever device you buy that contains a Cell processor, you will not be able to buy a faster replacement chip and plop it in there like you can with a PC; this will be especially true for PS3. Hence, there's no need for the chip to scale up in MHz as time passes. Yield better and suck less power, sure, but wringing more speed out of the chips would be pointless. After all, the PS2 EE still runs at just under 300MHz despite having been in production for years now and having gone through something like five silicon revisions, getting smaller and cooler each time...
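To illustrate what I mean by that 'designed speed' idea - and this is just my own rough sketch of the concept, not how Sony/IBM actually describe the mechanism - the point is simply that code written against a fixed speed gets padded out so it takes the same time even on faster silicon. Something in the spirit of this (the names and the 60Hz budget are made up for illustration):

[code]
/* Hypothetical sketch only - NOT the actual Cell mechanism, just the idea:
 * pad a piece of work out to a fixed "designed" time budget so it runs no
 * faster on quicker hardware. A busy-wait stands in for dummy instructions. */
#include <stdio.h>
#include <time.h>

#define DESIGNED_BUDGET_NS 16666667L   /* one 60Hz frame, made-up figure */

static long elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

static void do_frame_work(void)
{
    volatile long sink = 0;            /* placeholder for the real workload */
    for (long i = 0; i < 1000000; i++)
        sink += i;
}

int main(void)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    do_frame_work();

    /* Faster hardware finishes the work early; burn the remainder of the
     * budget so the observable timing stays at the 'designed' speed. */
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while (elapsed_ns(start, now) < DESIGNED_BUDGET_NS);

    printf("frame took %ld ns (budget %ld ns)\n",
           elapsed_ns(start, now), DESIGNED_BUDGET_NS);
    return 0;
}
[/code]

Obviously real hardware would do this with its own timers or cycle counters rather than clock_gettime, but the principle is the same: the software never runs 'faster' than it was designed to, no matter what silicon it lands on.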
You're obviously not aware of how things are carried out in the PC MPU space
Except, Cell - which is the chip I was talking about - isn't aimed at the PC space, something I clearly stated. Try to keep up, will you? I won't have to repeat myself as much that way.
I can't believe one of the people in the know didn't tear you a new one over that -- Vince
Ha ha. Well, maybe THEY actually read my post properly...
Additionally, your PIII vs Athlon example is way off. The PIII was the last breath of the PPro architecture. People were surprised it scaled as it did.
Maybe they shouldn't have been, considering the engineering resources Intel sunk into re-engineering the chip and tweaking it to get rid of critical speed paths etc. The PPro and its derivatives went from - I think - 133 to 1333MHz, unless you want to count the mobile Banias core as well (which borrows elements of NetBurst and is undoubtedly tweaked on the inside to scale better); then it's at around 1600MHz at the moment. The Athlon started at 500MHz and is now at 2400 or so, right? Not that much of a difference in absolute numbers - roughly 1200MHz gained versus 1900MHz, so actually to AMD's advantage, lol - especially considering AMD's considerably smaller pool of resources...
I think my example works just fine!
The Athlon didn't have a peer because it, unlike the PIII, had not shown its legs in the 0.18u Cu process.
Sorry, I don't quite follow you. I believe AMD reached 0.18 before Intel, and it used copper interconnects long before its rival switched over. Saying it didn't show its legs seems strange to me, since that's exactly what it seems to be doing!
In reality, it was the P3 not showing its legs for a while, but of course Intel decided to dump the P3 for the (initially) far worse performing P4, so that battle became rather moot really...
[quote]all this means is that CELL is likely going to have trouble meeting its target in typical workloads.[/quote]
...Which is just your baseless speculation, nothing more.
Your words are nothing but FUD at this point in time.
*G*