According to "Opening the Xbox", Intel had its own console in the planning stages during the Xbox's development, each company unbeknownst to the other. Intel and MS basically approached each other, MS to secure Intel hardware, and Intel to secure MS software. As the story goes, once Intel suits found out about the Xbox's existence, their system basically died a cold death. The Intel box was part of their consumer division (the guys who made the microscopes and such), so I don't think they were nearly as ambitious as MS in this regard.
I'm not too surprised... Back before Timna was canned, Intel used to make a lot of noise about set-top boxes as well.
You can't compare the PSX with the Xbox; times have changed. It is impressive how MS has still carved out a market for themselves despite the PlayStation's dominance.
Indeed times have changed. While the PSX had the benefit of Sega goofing in its strongest market (North America), it still faced a good 2 years of neck-and-neck competition in its home market. You could also say that the PSX benefitted from Nintendo's tomfoolery with the N64, but the BigN did offer a more graphically capable machine, and it did still sell quite well even if it wasn't the dominant platform...
One can also draw similar comparisons with the Xbox. While Sony did/does sit in a more unified and powerful position than the PSX's competitors faced, one cannot deny the benefit of one of the most innovative platform providers (SEGA, in both software and hardware) not only bowing out, but also going on to provide software support for the Xbox. Nintendo is no longer the big bully it once was, and has seemingly been content to profit from its position while owning the handheld market. One could also argue that the success of the PSX helped popularize videogaming even more, expanding the market and providing a point of entry for the Xbox...
MS's Xbox has already brought gaming audio to the next level, and with the built-in hard drive and HDTV support, the Xbox has done more than the PS2 and Cube.
I'd like to know how Microsoft has brought gaming audio to the next level. Perhaps in the aspect of underutilized hardware (although the Saturn and Dreamcast offer pretty stiff competition there)? Pushing spatial audio on a console was already accomplished last generation, and in the end, with the current gen of hardware, whether you use DICE (Xbox), DTS Interactive (PS2), basic Dolby Surround (PSX, DC), or DPLII (GCN, PS2), it all amounts to basically a transport mechanism of which the console is only half of the hardware equation.
More importantly, software has been a more determining factor. Off the top of my head, the amalgamation of music and 'Simon Says' by Konami in the Bemani series, Samba de Amigo, Rhapsody (a musical RPG), and innovative audio-based titles like Vib Ribbon, Rez, and Frequency (none of which are present on the Xbox) have done far more for 'taking gaming audio to the *next* level' than one could argue the Xbox has done so far...
As far as high-definition support goes, none of the current machines have really demonstrated anything significant in that regard (and no, 480p is not 'hi-def'). I mean, even the Saturn had Bomberman in a beautiful 704x488 progressive mode (assuming you had a Hi-Vision or NTSC-J monitor to support it). The Xbox certainly has the most potential to do so, but really exploiting high-definition TV means more than just higher-resolution output. In order to really take advantage of HDTV, you also need your art assets to exploit the wider gamut and color fidelity, and since static textures represent one of the non-resolution-independent, pre-calculated aspects of rendering, that obviously means textures will have to grow to accommodate HDTV's resolution increase. Of course, to really do all this, you also need higher HDTV market penetration than what currently exists...
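Just as illustrative back-of-the-envelope arithmetic (my numbers, not anything official): the point about textures growing falls straight out of the pixel counts.

```python
import math

# Illustrative arithmetic only: how much pixel density grows going from an
# SD framebuffer to 720p HDTV, and the implied texture-size increase if you
# want to keep roughly one texel per on-screen pixel on the same surface.

sd_pixels = 640 * 480        # typical SD framebuffer
hd_pixels = 1280 * 720       # 720p HDTV framebuffer

scale = hd_pixels / sd_pixels
print(f"720p has {scale:.1f}x the pixels of 480p")          # 3.0x

# A 256x256 texture that looked crisp at SD needs its linear dimensions
# scaled by sqrt(scale) to keep the same texel-per-pixel ratio.
linear = math.sqrt(scale)
print(f"256x256 -> roughly {256 * linear:.0f}x{256 * linear:.0f}")  # ~443x443
# Texture memory grows with the area, i.e. by the full 3x (before mipmaps).
```

So even a modest 720p target roughly triples the texture budget before you touch color depth or gamut, which is the real cost hiding behind "just render at a higher resolution."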
As far as the hard drive goes, Sony did release theirs first; however, Microsoft makes more extensive use of theirs, so I'll give you that one. FFX and XI are the only titles I know of that use it (and it's only been available in Japan anyways). Of course you could factor in the Linux kit, dunno how you wanna rate that...
Live looks to kick-start online console gaming, something which Sony and Nintendo can only hope to do with the PS3 and Cube2.
Live is indeed rather nice; however, it's not without its own drawbacks in areas where Sony's approach is more accommodating...
Just want to reiterate what Simon F. said in the 3D forum. All of Nvidia's and ATI's GPUs were/are designed for PCs (except for Flipper in the GCN), therefore they have to make a profit from the hardware itself right from the start. SONY's GS, EE, etc., however, enjoyed subsidies from game sales, therefore Sony could afford to make hardware that wasn't profitable at launch.
Well, seeing as neither Nvidia's nor ATi's console graphics components are PC components, I'm not sure that's a complete argument. Nvidia does benefit in general from having other immediate markets that can (and did) inherit the technology during the console contract, thus amortizing development costs across other markets (or you could say its console contract benefitted from those markets). ATi's part is even less beneficial in that regard, as it has no direct relation to any of its PC or set-top components. In the end you've got ATi and Nvidia designing a part under contract (with a relatively rich patron), in which the customer (who is relatively unique, meaning no other real competitors for said part) is a system integrator that is pretty much stuck with whatever cost you, the chip designer, set to cover your costs. In the case of Nintendo, their desire to make a profitable machine means the ATi part, via NEC, is sold at relative cost. In the Nvidia case, you've got Microsoft selling an expensive system at a loss. Microsoft, being the beneficiary of software subsidies, has to absorb the cost of parts as the component vendors will not; thus Nvidia sells the part at above cost.
In other words SONY could push .25µm fab tech to the limits because cost would be offset by software sales. Nvidia or ATi could've pushed the limits of .25µm fab tech also, but then they would have had to sell their chips at astronomical prices because they didn't have software sales to offset a loss.
You're partially right. Not so much because of software subsidy directly, but because the entire process was in-house. In the end even if ATi or Nvidia did benefit directly from the software subsidies, they would still be at the mercy of the capabilities of their foundries (NEC and TSMC respectively), and as we've seen with the NV30, an outside foundry can introduce issues not foreseen and/or make planning contingencies more difficult.
In the case of Sony, the chip design, foundry, and any additional related capital (e.g. software) are all in-house (or with close partner Toshiba). Building fab space of course isn't exactly cheap either, and doing so is probably something Nvidia isn't looking at getting into. Of course Sony also has other chip business outside of the PS2 with memory, DSPs, microcontrollers, and mixed-signal parts (DACs, ADCs, CCDs, etc.), so even if the PS2 had flopped they'd still have a useful investment in fab space (I think they have something like 7-9 fabs now). While Sony did push .25µm rather hard, it was very costly to them: they had planned on mass-producing the GS at .18µm, and the initial Nagasaki fab's spin-up problems left the small, low-capacity .25µm fab to fill orders not only for Japan, but also for the US launch (hence the PS2 shortage). This also means they have significant facilities to toy around with ultra-high-density ICs (e.g. the I-32) to feel out how mature their processes are at each significant design-rule reduction, to see how well they handle large, complex designs and to address any issues before they need to go into production...
I would see this as viable only if MS wanted to follow Sony's example of creating a very difficult-to-work-with platform. Given Intel's failure to produce competitive products in the consumer 3D market (even after they purchased a company explicitly for that reason), they would almost assuredly rely on CPU power to fill in for dedicated hardware.
Why? Is it inconceivable that Intel can design an SoC that's not "difficult to work with"? I mean, their PXA and IOP SoCs are pretty slick, and I haven't heard any complaints about PocketPC dev'ing being difficult on any of the PXA processors (granted, they're ARM)... Other than them not wanting to do so, I can't see them being incapable. Hey, I'm probably the farthest thing from an x86/IA-32 fan (or IA-64 for that matter), but I'm willing to give Intel the benefit of the doubt. Do you think they're a bunch of incompetent buffoons? The P6, Athlon, NetBurst, and Banias are all x86-compatible, yet they're all different microarchitectures. I personally was thinking of an Athlon-like execution core (symmetry-wise), shallower and wider (for lower execution latency), perhaps a 2-simple + 2-complex pipeline setup (in both integer and floating-point). Or simply leverage the microarchitecture of Banias on a more complex SoC (CPU core, on-chip bus or network, USB or some other I/O structure, one or more DSPs with caches or scratchpads for audio computation), with the GPU as a second IC, either in a discrete package or as a second die on the same package. Or it could be something more like the GCN setup, where the GPU is wrapped up with north- and south-bridge functionality in one chip, with the second simply being one of their COTS parts...
As for Intel's 'failure' in the graphics chip market, can you really say they've given the high end a really serious attempt? Hell, even Apple's purchased a graphics company. Considering Intel's never ventured beyond the scope of what the i740 entailed, yet they're one of the largest graphics core vendors, I'd say they're doing alright; or have you forgotten about their core-logic business? Sure it's not as glamorous as the high-end chips, but it's obviously an important sector, otherwise ATi and Nvidia wouldn't be so antsy to get involved.
Another aspect is that they would be turning their back on DirectX, which I don't see happening. I suppose it is within the realm of possibility, but the power and ease of development of the Xbox are something that I haven't seen anyone argue MS got wrong.
Now why would they abandon DirectX? You *do* realize the whole point of DirectX is to provide a uniform interface for a rather diverse range of hardware? As far as ease and power of development go, you haven't seen anybody argue that Nintendo got it all wrong with the GCN either. Is it an automatic assumption that Nintendo is going to go with a PowerPC/ATi/Macronix solution again?
This is the general impression that I have gained from observing AI in action (I've never even seen an AI script before, so I have absolutely no idea what they even entail). The AI in Half-Life, just to use a very outdated example, still seems to be considerably better than what we see in most new games. More intelligent and simply more real than what we are seeing today.
Well, to be realistic, not all games can be AI masterpieces. Nor are people going to want nothing but AI masterpieces (in the strategy sense). I mean, there are some pretty damn phenomenal AI systems in some of today's chess software (and I'm talking at the consumer level), but you don't see people knocking down doors to buy up chess games (hell, the variant of GNU chess that comes with my Mac is more than adequate for me).
It is fairly obvious that with the amount of graphics power the next gen will have, it will almost certainly take an educated pair of eyes to spot the differences in visuals. I am currently under the impression that the same scenario is going to play out on the AI/physics side of the coin. We are going to reach a point, more so than already, where coding skills determine how well AI/physics work on a given platform, and even taking that into consideration, you will still need a trained pair of eyes to spot the difference in AI/physics between the two likely platforms (not many people will notice if a Vette is pulling an extra .05 Gs over what it should on an off-camber, decreasing-radius turn, as an example).
Well, in comparison, graphics have been the easier problem to solve. We've had the tools (languages, APIs, mathematical and programming models) to do a fairly convincing job, and have basically been waiting for design and manufacturing to provide us with hardware that allows us to do in real time what we've done offline with our existing tools.
AI and physics pose somewhat more complex computational problems. You could even argue that graphics (and audio for that matter) are simply components of physics (modeling the behavior of light and sound). Yet what we do with AI and physics today is a mere pittance compared to graphics; there's so much more to be done in those fields.
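To make the scale of the problem concrete, here's a toy sketch (plain Python, all numbers and names mine, nothing from the thread) of the cheapest possible physics step, one Euler integration of point masses under gravity. Everything a real engine adds on top (collision detection, constraint solving, friction) is far more expensive than this inner loop:

```python
# Toy example: one explicit Euler integration step for point masses under
# gravity. Real engines layer collision detection, constraint solving, and
# friction models on top of this, each far costlier than the loop below.

GRAVITY = -9.81   # m/s^2, acting on the y axis
DT = 1.0 / 60.0   # one 60 Hz frame

def step(bodies, dt=DT):
    """Advance each body's (position, velocity) by one Euler step."""
    for b in bodies:
        b["vel"][1] += GRAVITY * dt                          # integrate acceleration
        b["pos"] = [p + v * dt for p, v in zip(b["pos"], b["vel"])]

bodies = [{"pos": [0.0, 10.0, 0.0], "vel": [1.0, 0.0, 0.0]}]
for _ in range(60):               # simulate one second of free fall
    step(bodies)
print(bodies[0]["pos"])           # y has dropped roughly 5 m from 10.0
```

Even this trivial update has to run for every dynamic object, every frame, before a single interaction between objects is considered, which is where the real computational complexity (and the gap versus graphics tooling) lives.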
I really wonder if Sony will even match DX9 features and IQ in the PS3 rasterizer (Graphics Synth 3, right?). That's probably the most they would be able to get into it.
Well, for one thing, mind pointing out the specification in DX9 for "image quality"? Secondly, why aim for DX9? Why not OpenGL 2.0, or how about a real-time RIB processor? With graphics hardware migrating towards a more general programming model, bullet-point 'features' are becoming an anachronism. After all, the VUs for the most part exceed the capabilities of the DX9 VS...
Not multitextured
Considering reasonably complex fragment programs can render that point moot, and it doesn't benefit off-screen draw performance either, I'm not sure that's as important as its other, more beneficial features. I guess it matters for trilinear performance...
but yet still struggles to output 480p
I'd like to know how setting a few registers amounts to "struggling?"
It would be interesting to see what some of the bigger studios could do with the Xbox if it had the budget and support that the PS2 currently has.
Well EA, Sega, Konami, and Namco are about as big as they come...
I guess that's enough hot air for today...