Where has this dual-SH4 DC come from? Sega wanted a cheap, easy-to-develop-for system, so why would they have been trying to cram more CPUs in there?
And given that the DC needed its CPU for T&L, because its GPU couldn't do T&L, why would MS have been trying to convince Sega to do that on the graphics chip instead?
This came from a rumor report (which stated the idea was dropped) in a 1997 or early-'98 Next Generation magazine. The double-edged sword, as you just pointed out: a second SH4 could take over the T&L duties while the other stayed free for other tasks, theoretically increasing performance and building on the four-odd years of dual-SH2 programming experience Sega's in-house dev teams had. The downside would be certain third parties complaining. With Microsoft "hired" to make the console friendlier to developers, back in an age when dual-CPU setups were seen as evil or demonized, and with those third parties really coming from the Microsoft PC platform, it's easy to see where the influence was actually coming from.
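For anyone wondering what "T&L duties" actually means in code terms, here's a minimal sketch (generic C++, not actual SH4 or Dreamcast SDK code; all names are illustrative) of the kind of per-vertex transform-and-light loop a second CPU could in theory have owned, leaving the first one free for game logic:

[code]
#include <cstddef>

struct Vec3   { float x, y, z; };
struct Vertex { Vec3 pos; Vec3 normal; float diffuse; };

// 4x4 row-major model-view-projection matrix.
struct Mat4 { float m[4][4]; };

// Transform and light one batch of vertices entirely on the CPU.
// On a dual-CPU design this whole loop could run on the second
// processor while the first runs game logic for the next frame.
void transformAndLight(const Mat4& mvp, const Vec3& lightDir,
                       const Vertex* in, Vertex* out, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i) {
        const Vec3& p = in[i].pos;
        // Position transform (perspective divide omitted for brevity).
        out[i].pos.x = mvp.m[0][0]*p.x + mvp.m[0][1]*p.y + mvp.m[0][2]*p.z + mvp.m[0][3];
        out[i].pos.y = mvp.m[1][0]*p.x + mvp.m[1][1]*p.y + mvp.m[1][2]*p.z + mvp.m[1][3];
        out[i].pos.z = mvp.m[2][0]*p.x + mvp.m[2][1]*p.y + mvp.m[2][2]*p.z + mvp.m[2][3];
        // Simple directional diffuse term, clamped at zero.
        const Vec3& n = in[i].normal;
        float d = n.x*lightDir.x + n.y*lightDir.y + n.z*lightDir.z;
        out[i].diffuse = d > 0.0f ? d : 0.0f;
        out[i].normal  = n;
    }
}
[/code]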
Of course, later with the Xbox 1, Microsoft marketed the idea that developers would gain the freedom to do whatever they wanted by having the GPU do only graphics, leaving the CPU (singular) free for other tasks.
Evil Microsoft and their evil OpenGL HAL, [strike]allowing[/strike] forcing people to do things on the GPU that would apparently be better done 100 times more slowly on the CPU. Yes, if only MS weren't forcing people to use Glide to do things faster off-CPU!
Call it what you want or ridicule it however you want, but would it be fair if I directly called you a Microsoft evangelist?
The PC's default operating system, as it has stood for the last 15 years, is dominated by Microsoft's proprietary software APIs. Sure, they get legal complaints, but they have tried to eliminate any kind of innovation that doesn't come from Microsoft.
Back in 1995, VideoLogic's first PowerVR chipset had its own API, NVIDIA's NV1 had its own, and so on; Microsoft then pushed compliance with "Direct3D X" as the rule, effectively terminating them. 3dfx managed to slip by with Glide, then got distracted with Sega (similar to NVIDIA's engineers not being able to focus on the NV30 because they were finishing the NV2A) and got too full of themselves (yet they had a lot of dev support, quite the splendid threat to MS), all while still having to comply with DX specs. Competitors who didn't have an independent blessing simply chose to focus on DX compliance... and then you get to OpenGL, which is not a Microsoft proprietary trademark and instead encourages choice.
Have we seen Microsoft trying to encourage OS competition, or is it what it really is: a monopoly?
The thought of a 2008 console using a G71 would give me nightmares too.
I said that in the context of Nintendo using it. Since the 55nm process was the standard by then, and a 55nm G71 was a possibility, it would have been ideal, even clocked at 430MHz and crippled to a 64-bit bus, because lower power draw and cooler running would have been the priorities, yet they would still have had far better checklist features.
Just saying that even though GPUs started having GPGPU functions quite a while ago, [b]there was no forward-looking DirectX release to take advantage of such a setup[/b], with DirectX 9 remaining the lowest common denominator for a long, long time and Vista's failure extending that even further.
You can see right now that developers targeting DX11 get opportunities for performance gains that are similar to how you would program for Cell + GPU. You can see this in DICE's later presentations, for example; there's plenty of discussion there on what I'm getting at.
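For reference, the DX11 feature that enables that Cell-style job split is deferred contexts: worker threads record command lists in parallel while the immediate context plays them back. A rough sketch against the D3D11 interfaces (error handling and the actual draw calls omitted; the device and immediate context are assumed to already exist):

[code]
#include <d3d11.h>

// Worker thread: record draw calls into a command list via a
// deferred context, much like building a job's output on an SPU.
ID3D11CommandList* RecordScenePortion(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... set state and issue Draw*() calls on 'deferred' here ...

    ID3D11CommandList* cmdList = nullptr;
    deferred->FinishCommandList(FALSE, &cmdList);
    deferred->Release();
    return cmdList;
}

// Main thread: once the workers are done, play the command lists
// back in order on the immediate context.
void SubmitScene(ID3D11DeviceContext* immediateCtx,
                 ID3D11CommandList* const* lists, unsigned count)
{
    for (unsigned i = 0; i < count; ++i) {
        immediateCtx->ExecuteCommandList(lists[i], FALSE);
        lists[i]->Release();
    }
}
[/code]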
I'm not necessarily blaming anyone here; it's just that PC development in general had a big impact on how the PS3's architecture was (under)used by third parties. It wasn't meant as seriously as you're taking it now, mind; I know very well that first-party studios had trouble adjusting as well. I just wanted to point out that the consoles, and particularly the PS3, were far ahead of their time in some respects that are only now, with DirectX 11, finally possible on PC (though it will probably still take a while before that's as well supported as DirectX 9). And with PC and 360 having become such a big item for multi-platform development, and with all the trouble DirectX 9 had in progressing towards DirectX 11, the PS3's 'outlandish' design has stayed outlandish for far longer than it needed to.
Microsoft controls and decides whatever it likes through Direct3D API compliance. DirectX is not really hardware; it's a hardware-agnostic API, so even if ATI has TRUFORM in the R200, or its tessellation descendant in the C1 and the Radeon HD 2900, it is NEVER going to be used by any developer unless they break Microsoft's API, use a competing API, or go to custom low-level tools.
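Case in point for the "competing API" route: on PC, TRUFORM was exposed through the GL_ATI_pn_triangles OpenGL vendor extension, no Direct3D blessing required. A rough sketch (extension detection elided; in a real app you'd verify the extension string and use the official glext.h tokens rather than redefining them):

[code]
#include <windows.h>
#include <GL/gl.h>

// Tokens per the GL_ATI_pn_triangles extension spec.
#define GL_PN_TRIANGLES_ATI                   0x87F0
#define GL_PN_TRIANGLES_TESSELATION_LEVEL_ATI 0x87F4

typedef void (APIENTRY *PFNGLPNTRIANGLESIATIPROC)(GLenum pname, GLint param);

// Entry point fetched once at startup; null on non-TRUFORM hardware.
static PFNGLPNTRIANGLESIATIPROC glPNTrianglesiATI = nullptr;

void initTruform()
{
    glPNTrianglesiATI = (PFNGLPNTRIANGLESIATIPROC)
        wglGetProcAddress("glPNTrianglesiATI");
}

void enableTruform()
{
    if (!glPNTrianglesiATI)
        return; // extension not available on this driver/hardware

    // Turn on PN-triangle tessellation and pick a subdivision level;
    // subsequent draw calls are then tessellated in hardware.
    glEnable(GL_PN_TRIANGLES_ATI);
    glPNTrianglesiATI(GL_PN_TRIANGLES_TESSELATION_LEVEL_ATI, 5);
}
[/code]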
I'll pose another "what if" scenario. We all know Nintendo contracted ArtX, and within a short time ArtX was bought out by ATI. We know the R200 and its TRUFORM were not perfect, and ATI dropped the R250 in favor of focusing on ArtX's R300. Well, say the contract somehow gets renegotiated and the console delayed (I know, highly unlikely), but ATI insists on convincing Nintendo it would have an edge with an R250 in a delayed GameCube (one that would also get a faster CPU and more RAM in the process, but most likely the same mini-disc). Now Nintendo would have a GPU clocked at 300MHz, depending on a 150nm or 130nm process, and they would have the direct competitor to the NV2A, only they also have TRUFORM.
It's obvious a custom API would have to be used, and one that is not Microsoft-trademarked. Do you really think the Xbox 1 would have had bragging rights for four years of tech superiority in shipping games?
Getting back to more current matters, don't you see the strategy on Microsoft's part in forcing a next gen in 2005? The 90nm C1 and NV47 ran very hot and sucked mad electro-juice given the tech limitations of the time. A G80 was out of the question unless a significant die shrink was done; my estimate is 55nm, using a G92b instead. You wanna use a GT200b at 55nm? No, you can't; you'd have to go to 40nm and maybe lower.
It's true that, wow, DX11 now supports all of these advanced features, but was it in Microsoft's interest that a G80, G92b, GT200b, or Fermi land in Sony's lap without some realistic thermal, power-draw, and TIME cost?
The next-generation Xbox and PlayStation are going to have GPU feature differences, because just as MS was able to exploit having a unified shader pipeline, they are not going to hand the keys over to Sony. But placing too much weight on DirectX or DX11 alone is kind of ridiculous; it's still just an API, not a magic mushroom.