MS wants XBox2 out before PS3?

According to "Opening the Xbox", Intel had its own console in the planning stages during the Xbox development (unbeknownst to each other). Intel and MS basically approached each other, MS to secure Intel hardware, and Intel to secure MS software. As the story goes, once Intel suits found out about the Xbox's existence, their system basically died a cold death. The Intel box was part of their consumer division (the guys who made the microscopes and such), so I don't think they were nearly as ambitious as MS in this regard.

I'm not too surprised... Back before Timna was canned, Intel used to make a lot of noise about set-top boxes as well.

You can't compare the PSX with the Xbox; times have changed. It is impressive how MS has still carved out a market for themselves despite the PlayStation's dominance.

Indeed times have changed. While the PSX had the benefit of Sega goofing in its strongest market (North America), it still faced a good 2 years of neck-and-neck competition in its home market. You could also say that the PSX benefited from Nintendo's tomfoolery with the N64, but the BigN did offer a more graphically capable machine, and it did still sell quite well even if it wasn't the dominant platform...

One can also draw similar comparisons with the Xbox. While Sony did/does sit in a more unified and powerful position than the PSX's competitors faced, one cannot deny the benefit of one of the most innovative (both in software and hardware) platform providers (SEGA) not only bowing out, but also going on to provide software support for the Xbox. Nintendo is no longer the big bully it once was, and has seemingly been content to profit from its position while owning the handheld market. One could also argue that the success of the PSX helped bring videogaming further into the mainstream, expanding the market and providing a point of entry for the Xbox...

The MS Xbox has already brought gaming audio to the next level, and with the built-in HD and HDTV support, the Xbox has done more than the PS2 and Cube.

I'd like to know how Microsoft has brought gaming audio to the next level. Perhaps in the aspect of underutilized hardware (although the Saturn and Dreamcast offer pretty stiff competition in that aspect)? Pushing spatial audio on a console was already accomplished last generation, and in the end with the current gen of hardware, whether you use DICE (Xbox), DTS Interactive (PS2), basic Dolby Surround (PSX, DC) or DPLII (GCN, PS2), it all amounts to basically a transport mechanism, of which the console is only half of the hardware equation.

More importantly, software has been the more determining factor. Off the top of my head, the amalgamation of music and 'Simon Says' by Konami in the Bemani series, Samba de Amigo, Rhapsody (a musical RPG), and innovative audio-based titles like Vib Ribbon, Rez, and Frequency (none of which are present on the Xbox) have done far more for 'taking gaming audio to the *next* level' than one could argue the Xbox has done so far...

As far as high-definition support goes, none of the current machines have really demonstrated anything significant in that regard (and no, 480p is not 'hi-def'). I mean, even the Saturn had Bomberman in a beautiful 704x488 progressive mode (assuming you had a Hi-Vision or NTSC-J monitor to support it). The Xbox certainly has the most potential to do so, but really exploiting high-definition TV means more than just higher resolution output. In order to really take advantage of HDTV, you also need your art assets to exploit the wider gamut and color fidelity, and since static textures represent one of the non-resolution-independent, pre-calculated aspects of rendering, that obviously means textures will have to grow to accommodate HDTV's resolution increase. Of course, to really do all this, you also need higher HDTV market penetration than what currently exists...
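
To put a rough number on that last point (my own back-of-the-envelope arithmetic, nothing from a spec sheet):

    \frac{1280 \times 720}{640 \times 480} \;=\; \frac{921\,600}{307\,200} \;=\; 3

So merely holding today's texel-per-pixel density at 720p already implies roughly 3x the texture data (about 1.7x per axis), and that's before 1080i or the wider gamut even enter the picture...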

As far as the hard drive goes, Sony did release theirs first; however, Microsoft makes more extensive use of theirs, so I'll give you that one. FFX and XI are the only titles I know that use it (and it's only been available in Japan anyway). Of course you could factor in the Linux kit, dunno how you wanna rate that...

Live looks to kick start online console gaming, something which Sony and Nintendo can only hope to do with the PS3 and Cube2.

Live is indeed rather nice; however, it's not without its own drawbacks, and in some areas Sony's approach is more accommodating...

Just want to reiterate what Simon F. said in the 3D forum. All of Nvidia's and ATI's GPUs were/are designed for PCs (except for Flipper in the GCN), therefore they have to make a profit from the hardware itself right from the start. SONY's GS, EE, etc., however, enjoyed subsidies from game sales, therefore they could afford to make hardware that wasn't profitable at launch.

Well, seeing as neither Nvidia's nor ATi's console graphics components are PC components, I'm not sure that's a complete argument. Nvidia does benefit in general by having other immediate markets that can (and did, or you could say its console contract benefited from other markets) inherit the technology during the console contract, thus amortizing development costs across other markets. ATi's part is even less beneficial in this regard, as it has no direct relation to any of its PC or set-top components. In the end you've got ATi and Nvidia designing a part as part of a contract (with a relatively rich patron), in which the customer (who is relatively unique, meaning no other real competitors for said part) is a system integrator that is pretty much stuck with whatever price you, the chip designer, set to cover your costs. In the case of Nintendo, their desire to make a profitable machine means the ATi part, via NEC, is sold at relative cost. In the Nvidia case, you've got Microsoft selling an expensive system at a loss. Microsoft, being the beneficiary of software subsidies, has to absorb the cost of parts as the component vendors will not; thus Nvidia sells the part at above cost.

In other words, SONY could push .25µm fab tech to the limits because the cost would be offset by software sales. Nvidia or ATi could've pushed the limits of .25µm fab tech also, but then they would have had to sell their chips at astronomical prices because they didn't have software sales to offset a loss.

You're partially right. Not so much because of software subsidy directly, but because the entire process was in-house. In the end even if ATi or Nvidia did benefit directly from the software subsidies, they would still be at the mercy of the capabilities of their foundries (NEC and TSMC respectively), and as we've seen with the NV30, an outside foundry can introduce issues not foreseen and/or make planning contingencies more difficult.

In the case of Sony, the chip design, the foundry, and any additional related capital (e.g. software) are all in-house (or with close partner Toshiba). Building fab space of course isn't exactly cheap either, and doing so is probably not something Nvidia is looking at getting into. Of course Sony also has other chip business outside of the PS2, with memory, DSPs, microcontrollers, and mixed-signal parts (DACs, ADCs, CCDs, etc.), so even if the PS2 had flopped they'd still have a useful investment in fab space (I think they have something like 7-9 fabs now). While Sony did push .25µm rather hard, it was very costly to them: it was .18µm that they had planned for mass production of the GS, and the Nagasaki fab's (the initial one, that is) spin-up problems left the small, low-capacity, initial .25µm fab to fill orders not only for Japan, but also for the US launch (hence the PS2 shortage). This also means they have significant facilities to toy around with ultra-high-density ICs (e.g. I-32) to feel out how mature their processes are at each significant design-rule reduction, to see how well they handle large, complex designs and to address any issues before they need to go into production...

I would see this as viable only if MS wanted to follow Sony's example of creating a very difficult-to-work-with platform. Given Intel's failure to produce competitive products in the consumer 3D market (even after they purchased a company explicitly for that reason), they would almost assuredly rely on CPU power to fill in for dedicated hardware.

Why? Is it inconceivable that Intel can design an SoC that's not "difficult to work with?" I mean their PXA and IOP SoCs are pretty slick and I haven't heard any complaints about PocketPC dev'ing being difficult on any of the PXA processors (granted they're ARM)... Other than them not wanting to do so, I can't see them being incapable. Hey I'm probably the farthest thing from an x86/IA-32 fan (or IA-64 for that matter), but I'm willing to give Intel the benefit of the doubt. Do you think they're a bunch of incompetent buffoons? The P6, Athlon, Netburst, and Banias are all x86 compatible, yet they're all different microarchitectures. I personally was thinking of an Athlon-like execution core (symmetry-wise), shallower and wider (for lower execution latency), perhaps a 2 simple + 2 complex pipeline setup (both in integer and floating-point). Or simply leverage the microarchitecture of Banias on a more complex SoC (CPU core, on-chip bus or network, USB or some other I/O structure, one or more DSPs (and caches or scratchpads) for audio computation), with the GPU as the second IC, either as a discrete package or a second die on the same package. Or it could be something more like the GCN setup, where the GPU is wrapped up with N- and S-bridge functionality in one chip, with the second simply being one of their COTS parts...

As for Intel's 'failure' in the graphics chip market, can you really say they've given the high end a serious attempt? Hell, even Apple's purchased a graphics company. Considering Intel's never ventured beyond the scope of what the i740 entailed, yet they're one of the largest graphics core vendors, I'd say they're doing alright, or have you forgotten about their core-logic business? Sure, it's not as glamorous as the high-end chips, but it's obviously an important sector, otherwise ATi and Nvidia wouldn't be so antsy to get involved.

Another aspect is that they would be turning their back on DirectX, which I don't see happening. I suppose it is within the realm of possibility, but the power and ease of development of the XBox are something that I haven't seen anyone argue MS got wrong.

Now why would they abandon DirectX? You *do* realize the whole point of DirectX is to provide a uniform interface for a rather diverse range of hardware? As far as ease and power of development, you haven't seen anybody argue that Nintendo got it all wrong with the GCN either. Is it an automatic assumption that Nintendo is going to go with a PowerPC/ATi/Macronix solution again?

This is the general impression that I have gained from observing AI in action (I've never even seen an AI script before, so I have absolutely no idea what they even entail). The AI in Half-Life, just to use a very outdated example, still seems to be considerably better than what we see in most new games. More intelligent and simply more real than what we are seeing today.

Well, to be realistic, not all games can be AI masterpieces. Nor are people going to want nothing but AI masterpieces (in the strategy sense). I mean, there are some pretty damn phenomenal AI systems in some of today's chess software (and I'm talking at the consumer level), but you don't see people knocking down doors to buy up chess games (hell, the variant of GNU Chess that comes with my Mac is more than adequate for me).

It is fairly obvious that with the amount of graphics power the next gen will have, it will almost certainly take an educated pair of eyes to spot the differences in the visuals. I am currently under the impression that the same scenario is going to play out on the AI/physics side of the coin. We are going to reach a point, more so than already, where coding skills determine how well AI/physics work on a given platform, and even taking that into consideration, you will still need a trained pair of eyes to spot the difference in AI/physics between the two likely platforms (not many people will notice if a Vette is pulling an extra .05 Gs over what it should on an off-camber, decreasing-radius turn, as an example).

Well in comparison graphics have been a relatively easier problem to solve. We've had the tools (languages, APIs, mathematical and programming models) to do fairly convincing jobs, basically waiting for design and manufacturing to provide us with hardware that allows us to do in real-time what we've done off-line with our existing tools.

AI and physics pose somewhat more complex computational problems. You could even argue that graphics (and audio for that matter) are simply components of physics (modeling the behavior of light and sound). Yet what we do with both today is a mere pittance compared to graphics. There's so much more to be done in those fields compared to what we do today.

I really wonder if Sony will even match DX9 features and IQ in the PS3 rasterizer (Graphics Synth 3, right?). That's probably the most they would be able to get into it,

Well for one thing, mind pointing out the specification in DX9 for "Image Quality?" Secondly, why aim for DX9? Why not OpenGL 2.0 or how about a real-time RIB processor. With graphical hardware migrating towards a more general programming model, bullet point 'features' are becoming an anachronism. After all, the VUs for the most part exceed the capabilities of the DX9 VS...

Not multitextured

Considering reasonably complex fragment programs can render that pointless, nor does it benefit off-screen draw performance, I'm not sure if that's as important as its other more beneficial features. I guess for trilinear performance it matters...

but yet still struggles to output 480p

I'd like to know how setting a few registers amounts to "struggling?" :-?

It would be interesting to see what some of the bigger studios could do with the Xbox if it had the budget and support that the PS2 currently has.

Well EA, Sega, Konami, and Namco are about as big as they come... ;)


I guess that's enough hot air for today... :oops:
 
The MCP specs sure make it look like very interesting hardware.

Give it time, and should games like Vib Ribbon or Rez appear on the Xbox, you can be sure that the sonic experience from such games on the Xbox will be a step up compared to any other console today. :D

Some Xbox games can run at 720p, and more 720p games are coming. No big deal to PC gamers, but amazing and a step up nonetheless, considering the limited Xbox hardware. :eek:

Sony will likely turn to the XBLive approach once the PS3 is out, with a built-in HDD and network adapter. Their current free-to-play attitude is more of a temporary measure and a cover-up, due to the PS2 hardware limitation (i.e. no built-in online parts).

Oh, and Sega, EA, Konami, and Namco aren't really pushing the Xbox hardware enough, with all those sloppy ports.
Well, maybe Sega with PDO (which somewhat highlights the poW@R of the Xbox).

Look to Bungie for Halo 2 people. :oops:
 
I'd like to know how setting a few registers amounts to "struggling?"

Didn't the PS2's trouble with 480p stem from the need to preserve resolution bandwidth to put out a pretty picture for the vast majority of systems out there (480i)?

It was said a while back that SCE R&D found a way around this, and was going to start distributing the specs to developers soon, so maybe the end result really was just a matter of a few registers? :p

Well EA, Sega, Konami, and Namco are about as big as they come...

Well, aside from Sega's fetish for reusing DC assets, none of the rest have put out a truly big-name Xbox exclusive, whereby a title was designed from the ground up for the system. DTR and MGS2:S are not exactly what I'd call titles that were engineered at birth for the Xbox... :p

Panzer Dragoon Orta looks like a good start from Sega...

What I meant was, what if big studios like Square and Konami had spent the 2-3 years and millions of dollars on Xbox titles, rather than the PS2? Unrealistic, but it'd be interesting to see what the end result would be.

zurich
 
Didn't the PS2's trouble with 480p stem from the need to preserve resolution bandwidth to put out a pretty picture for the vast majority of systems out there (480i)?
No.

It was said a while back that SCE R&D found a way around this and was going to start distributing the specs to developers soon, so maybe the end result really was just a matter of a few registers?
No again 8)

The truth is that parts of the CRT controller registers were, and still remain, officially undocumented, and stuff pertaining to switching scanning modes is part of that.
If you were brave, and had a few displays that you didn't mind destroying, you could likely reverse engineer the proper settings required to display various SVGA/HDTV resolutions.
But in the end it would do you little good because even now that the high resolution support has been officially included in basic SCE libraries, you still have to request Sony's approval to include any kind of progressive scan support in your title.
Trying to do so before they made the "P-scan" support official would probably never have passed QA.
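
For the curious, here's roughly what "setting a few registers" boils down to in homebrew terms: a minimal sketch using the SetGsCrt kernel syscall, with mode constants taken from later open homebrew headers rather than from any official SCE documentation (so treat the values as assumptions). A licensed title would of course go through the sanctioned SCE libraries and the approval process described above.

    #include <kernel.h>            // ps2sdk: declares the SetGsCrt() syscall

    // Video mode IDs as they appear in open homebrew headers (assumption,
    // not official SCE documentation):
    #define GS_MODE_NTSC      0x02 // 480i
    #define GS_MODE_DTV_480P  0x50 // 480 progressive

    static void set_480p(void)
    {
        // interlace = 0 (non-interlaced), mode = 480p, field/frame = FRAME
        SetGsCrt(0, GS_MODE_DTV_480P, 1);

        // The DISPLAY/DISPFB CRTC registers still have to be reprogrammed
        // for the new timing afterwards -- that's the part that was (and
        // largely remains) officially undocumented.
    }

Which is why it really is "just registers" at the hardware level; the hard part was always the documentation and the approval, not the silicon.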
 
Well, seeing as neither Nvidia's nor ATi's console graphics components are PC components, I'm not sure that's a complete argument. Nvidia does benefit in general by having other immediate markets that can (and did, or you could say its console contract benefited from other markets) inherit the technology during the console contract, thus amortizing development costs across other markets.

Isn't the GPU in the Xbox basically a slightly modified GF3 though? In other words, the design was initially targeted at PCs, therefore it had to be profitable in the PC space at launch. I do agree that fabless companies like Nvidia have more difficulty pushing fab processes though. Better safe than sorry, I guess.
 
archie4oz said:
As far as the hard drive goes, Sony did release theirs first; however, Microsoft makes more extensive use of theirs, so I'll give you that one. FFX and XI are the only titles I know that use it (and it's only been available in Japan anyway). Of course you could factor in the Linux kit, dunno how you wanna rate that...

There are some more titles that use the hard disk in addition to FF-X and FF-XI, e.g. Wild Arms Advanced, Virtua Fighter 4, and at least one or two more RPGs off the top of my head.

The BB Navigator software (you can download game demos and view new game information on the game channels) also needs the hard disk and the BB Unit.
 
archie4oz said:
Pushing spatial audio on a console was already accomplished last generation, and in the end with the current gen of hardware, whether you use DICE (Xbox), DTS Interactive (PS2), basic Dolby Surround (PSX, DC) or DPLII (GCN, PS2), it all amounts to basically a transport mechanism, of which the console is only half of the hardware equation.

Of all the consoles, my understanding is only the Xbox gives you this basically for free.

Given a mono audio source sample, you can tell the MCPX where in 3D space you want the emitter to be, and the MCPX will figure out all the HRTFs and cross-fading, and generate the correct final mixdown to be delivered in DD5.1 to the output device, at virtually zero cost -- my understanding is enabling the final DD5.1 mixdown is as easy as turning it on.

If you want to get more sophisticated, you cast some rays into your game's world database, assign a velocity, direction, and a propagation cone, and tell the MCPX to apply the right doppler, echo, reverb, filters, effects, etc. Maybe calculate first and second order reflections and setup the echo parameters. Get as sophisticated as you like.

Again, basically free in terms of CPU usage once you've figured out what parameters you want to use. And the 5.1 mix is generated on the fly by the MCPX for free -- no CPU intervention.
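
To give a flavor of what that flow looks like in code, here's a rough sketch using the plain PC DirectSound3D interfaces; the Xbox's DSound API isn't identical, so treat the interface usage and the PlaceEmitter helper as an illustration of the concept rather than actual XDK calls:

    #include <windows.h>
    #include <dsound.h>   // PC DirectSound3D, used here purely for illustration

    // Hypothetical helper: 'pBuffer' is assumed to hold a mono source sample
    // in a secondary buffer created with the DSBCAPS_CTRL3D flag.
    void PlaceEmitter(IDirectSoundBuffer8* pBuffer,
                      float x, float y, float z,      // position in world space
                      float vx, float vy, float vz)   // velocity, for Doppler
    {
        IDirectSound3DBuffer* p3D = NULL;
        if (SUCCEEDED(pBuffer->QueryInterface(IID_IDirectSound3DBuffer,
                                              (void**)&p3D)))
        {
            // Just say where the emitter is and how it moves; the HRTFs,
            // cross-fading and final 5.1 mixdown happen downstream without
            // per-frame CPU work from the game.
            p3D->SetPosition(x, y, z, DS3D_DEFERRED);
            p3D->SetVelocity(vx, vy, vz, DS3D_DEFERRED);

            // Optional: shape the emission into a propagation cone.
            p3D->SetConeOrientation(0.0f, 0.0f, 1.0f, DS3D_DEFERRED);
            p3D->SetConeAngles(90, 270, DS3D_DEFERRED);

            // Deferred settings are applied in one batch via the listener's
            // CommitDeferredSettings() once all emitters are updated.
            p3D->Release();
        }
    }

The reverb/echo/occlusion sophistication described above would layer on top of this, but the basic "position it and forget it" model is the point here.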

Anyway, my impression is that none of the other consoles have audio hardware that's quite at this level of sophistication, but I could be totally wrong.
 
Xenosaga is another game that I believe supports the hard drive.

Question- is the HDD code removed from Final Fantasy X's North American version? I'm just curious if anyone has tried it...
 
Vince-

Actually they're not, and I'll show you. I see a lack of thinking and imagination on your side, coupled with resentment of anything that's not plainly in your idea of the future.

Analyzing the potential market for a product is something I spend a good deal of time doing at my 9-5. There are a slew of marketing issues which I'll get into in a bit, but for now I'll cover your analogies in more depth.

Its relatively centralized processing can be several orders of magnitude more expensive, suck in more power, and be much more vast in scope (i.e. an array) than your desktop can.

And how many users utilize a decent fraction of their current CPUs? You have three main groups of users who match the criteria: pro artists, engineers/scientists, and gamers. For gamers it is a non-issue; the latency would make games unplayable (you would be dealing with 1/2 second of input latency at least). So by far the largest segment of users who could utilize extra cycles is removed from the idea right off. We can look at Quake 3, a game that is three years old, for an example of a title that shows a significant improvement moving from SDRAM to DDR or RDRAM; the bandwidth and latency of remote computing have no chance of working in the gaming market.

For pro artists you have 2D, 3D, and video. 2D is nearly done in terms of drastic speed-ups. With video, your limiting factor the majority of the time is I/O, not CPU power (although for certain functions it is possible they could benefit from load balancing). For 3D artists, test renders and final renders are the only things you would really need the extra power for, and both of those are starting to move over to the GPU. By the time IBM has anything remotely like their ideal vision of Grid working, test renders will likely be a non-factor, with final rendering being the only real push there (and for that, Grid would fit perfectly). Engineers and scientists can always use the power (although how many of them lack supercomputers is another subject entirely).

You are talking about supplying the overwhelming majority of people with something that they simply don't need. Try and think of computing-intensive applications in the future that are going to come close to outpacing the advancement of consumer hardware. For Grid to work as a marketplace item they would need a killer app that isn't too time-sensitive (as latency is going to be at least two orders of magnitude higher than on a PC).

It's hassle-free (thank you, God). No more worrying about WindowsXP not reformatting everything, or the quirky bugs on your desktop that keep crashing and losing your data. No more technical worries at all. This has been the progression of human societal evolution when it comes to specialization and education. We no longer need to worry about or know how to farm, produce energy, or get and make sure the water is clean. There's no reason we need to fuck with the problems of a desktop PC.

I wish I had a copy of OS/2 to send your way; IBM creating a seamless "utility" of an OS :LOL:

Let's say that they started researching it now and could get it figured out in fifteen years (companies that have proven significantly better than IBM in the field have already spent longer than that). What makes you think that between Apple, MS, and Linux, one of them won't be able to do the same on a standard PC? Doing it on a PC is quite simple in comparison to some monster data warehouse/server. If we look at pain-free devices that compromise the functionality of a PC, they have already been tried and have failed (WebTV likely the most notable: it delivered everything it promised and it wasn't close to enough).

The point is, it (computing = utility) could become as seamlessly integrated into everything we do as the present utilities, to a point where we don't even know it's there.

We certainly don't need Grid/network computers, or anything like them, to get us there. Do you want to pay a monthly fee for something you don't have to now? Something that would reduce the abilities you already have? It won't sell.

First off, I never said "electricity" for a reason; I said power generation. People used several forms of power generation, like taming the wind, flowing water, et al., to help with manufacturing. Not everyone is as limited in vision as you.

I'll pull up your quote-

Or a few hundred years after that, when some forward-looking man said, in the future the home will have no generating source of power. Well, how insane is that?

The home is what I was replying to, and it represents a major shift in your end of the argument. If you were stating that industry had uses for power generation, that would have been something entirely different from homes. I live in central New England; there are two 19th-century houses across the street from me (and thousands in the area where I live), so I'm quite familiar with exactly what homes of that era entailed, and power generation was a non-factor.

This may seem like nitpicking but it certainly isn't. Stating that Grid may have some uses in industry is worlds different from saying it has a viable place in the home.

Those are simply logical problems, nothing compared to the nightmare of trying to market this to consumers. Connecting online is the top reason most people have PCs, which would seem to indicate that an appliance-type device would be perfect for the market. This has already been tried and backed by tens of millions in marketing, and it was built around an honest 'turn it on and go' product: WebTV. It was significantly cheaper than a PC and still failed to gain widespread acceptance. Why? It compromised functionality. Despite people's prime concern being access to the net, the inability to handle other operations killed any chance WebTV ever had in terms of marketplace acceptance. And people didn't want to pay an additional monthly fee on top of that.

How would you sell Grid to the average consumer? You have to assume that technologies are going to be developed and only applied to it for it to be close to viable (a hassle-free computing experience, as a general example). You think that IBM is going to come up with some new ideas that haven't been attempted before? Grid works on the principles of a render farm or distributed computing; IBM simply thinks (they certainly didn't come up with the idea) that people will pay them a fee for something they can already do.

Archie-

Why? Is it inconceivable that Intel can design an SoC that's not "difficult to work with?" I mean their PXA and IOP SoCs are pretty slick and I haven't heard any complaints about PocketPC dev'ing being difficult on any of the PXA processors (granted they're ARM)... Other than them not wanting to do so, I can't see them being incapable. Hey I'm probably the farthest thing from an x86/IA-32 fan (or IA-64 for that matter), but I'm willing to give Intel the benefit of the doubt. Do you think they're a bunch of incompetent buffoons?

When it comes to designing a 3D rasterizer? The i740, when first launched, was supposed to be a high-end part (complete with the price tag) and was supposed to set a new standard in 3D graphics. It failed miserably. How many companies in the world have proven that they can produce a feature-complete 'GPU'? I can only think of a small handful: 3DLabs, SGI, ATi, and nVidia.

Why did I state that an Intel platform would be more difficult to work with? Based on their history, I don't see them coming up with a feature-complete part capable of exploiting the HLSL in up-to-date (let alone ahead-of-the-curve) DX revisions. They have failed to release a DX7-level part to date, and I don't see them pulling out the engineers to accomplish such a task when they specialize in processor-centric applications. If the design were to fall to Intel, I see them almost assuredly following Sony's design theme (reliance on CPU power to fill in for GPU functionality). Even with their sole attempt to enter the high-end consumer 3D market, they relied on other platform technology to help them cover design issues (the i740 add-in AGP boards had no on-board texture memory... WTF were they thinking?).
 
Grid computing will become popular the day every person in America is comfortable using Linux, every home in the US has a T3 line, PC games run Global Illumination in realtime, and dotcom companies are more profitable than bricks-and-mortar.

Oh yeah, and did I mention the flying pigs, Hell freezing over, and Israel making peace with Palestine?

Grid computing is an impractical attempt to answer a problem that does not exist and has never existed.
 
Yes, but the technical jump from SC to TTT is much smaller than TTT to DOA3.

And yet again, the Xbox launched a year and a half after the PS2. Btw, if you think there's a huge technical jump between Soul Calibur 2 on the PS2 and DOA3, I beg to differ...

That is why I don't see any reason why the Xbox2 will be any worse than the PS3, earlier or not (+/- 1 year).
Has it ever been documented that a computer/console appearing on the market *one year* after another piece of hardware was less powerful? If it has, I'd say such occurrences were extremely rare, or confined to non-competing environments.
 
But the DC -> PS2 time difference is about the same too. :D
The jump from the PS2 to the Xbox is bigger than from the DC to the PS2.
Some developers have said that the PS2 and DC are actually quite similar.

This is why I am impressed with MS, and why I believe the Xbox2 will more than match the PS3. 8)
 
The Xbox2 to PS3 gap will likely be the same as PS2 to Xbox... The PS2 can keep up with the Xbox for a while, but the next generation (Halo 2, DOOM 3) far surpasses what the PS2 is capable of. If there are 2nd-generation Xbox2 titles competing with 1st-gen PS3 titles, they will likely look similar, but after a while the PS3 will own the Xbox2.
 
BoddoZerg said:
The Xbox2 to PS3 gap will likely be the same as PS2 to Xbox... The PS2 can keep up with the Xbox for a while, but the next generation (Halo 2, DOOM 3) far surpasses what the PS2 is capable of. If there are 2nd-generation Xbox2 titles competing with 1st-gen PS3 titles, they will likely look similar, but after a while the PS3 will own the Xbox2.

Huh?
 
Something OT first: I just watched the first two episodes of .Hack Sign, and to say the least, I'm very impressed/hooked. I was also interested in the game before; now I'm even more so.
And Archie, seeing that you made me curious about it in that other thread, I blame you if I miss any deadlines in the following weeks :p

Back on topic, something that slipped me earlier but I wanted to comment on.
Gubbi,
Bilinear obviously works. I stand corrected on the mipmapping, which then poses another question: why isn't anybody using it (or if they are, why are they using über-aggressive LOD settings)? Is it expensive?
On the contrary - mipmapping on PS2 is pretty much a 'must' to get really good performance. Not using mipmaps is just wrong for performance in more ways than I even want to count right now.
It IS true that many, particularly early, titles completely omitted mipmaps - and although I couldn't know the exact reasons, I am pretty sure most of them were due to misconceptions about the workings of the PS2 hw.
 
It IS true that many, particularly early, titles completely omitted mipmaps - and although I couldn't know the exact reasons, I am pretty sure most of them were due to misconceptions about the workings of the PS2 hw

It always amazes me some of the assumptions developers make.
This is why second-generation software is better than first-generation software; the mental model you have of the hardware is considerably more accurate when you start development of a second title.
You should tell them how much fillrate you lose if you're scaling the texture down by a significant margin.
I know; given the embedded memory, I was somewhat surprised by the figures.
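
A crude back-of-the-envelope of why it hurts so much even with the eDRAM right there (my own illustrative numbers, not ERP's actual figures): say the texture unit pulls texels into its on-chip buffer in B x B blocks, and the texture is being minified by a factor of s per axis with no mips, so neighbouring pixels sample texels roughly s apart. Each block then only serves about (B/s)^2 pixels before a new one has to be fetched:

    \text{block refills per pixel} \approx \left(\frac{s}{B}\right)^{2},
    \qquad s = 8,\ B = 8 \;\Rightarrow\; 1 \text{ refill/pixel, vs. } \tfrac{1}{64} \text{ at } s \approx 1

i.e. on the order of 64x the texture traffic of a properly mipmapped draw. Mipmapping keeps s near 1, which is the whole point.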
 
ERP said:
It always amazes me some of the assumptions developers make.
This is why second-generation software is better than first-generation software; the mental model you have of the hardware is considerably more accurate when you start development of a second title.

This seems to be especially true for fellow Xboy fanatic, chap. :oops: ...being a critical armchair connoisseur of hardware architecture specs (undoubtedly inspired through marketing channels) and having programmed a grand total of ZERO games, yet he frequently carries himself as THE industry voice and "insider" to all future events. :oops:
 
Fafalada said:
Something OT first: I just watched the first two episodes of .Hack Sign, and to say the least, I'm very impressed/hooked. I was also interested in the game before; now I'm even more so.


I just finished that series a couple of weeks ago. I loved it as well.

It does drag at times, but they captured a certain element; the characters are all stereotypes of Internet personas (for the most part), and fairly accurate ones IMO.

Also as a Phantasy Star Online player I was impressed by the obvious PSO-influences :D
 
Ben,

I'm quickly becoming irritated with your inability to think about things outside of your little scope and perceptions... come on now.

The vast majority of future computing uses (i.e. total, not just PC-based) will be in smaller devices (cell phones, PDAs, digital newspapers/clothes/et al.) that don't require near-instantaneous access and will be used for email, internet, communications, simple programs, education purposes, video, etc. It's for these devices that computing will become a utility first.

Unlike you, who I feel may get off on tinkering with your PC and fixing every error that pops up, most people don't. They're going to want to buy a device, plug it in, and have it work. They don't want to continually install new versions of MS Encarta, and Windows, and Word, and Office, and any other goddamned software that has patches or updates.

For these people (i.e. the ones whose lives keep them too busy to spend hours getting WindowsXP to work), computing as a utility will be a godsend. Have it through broadband, wireless, who cares. I pay, say, $40 a month and I can plug in (or wirelessly connect) all my electronic devices and they work and communicate with each other - always updated and working flawlessly. Hell, it allows me to access volumes of programs and electronic media seamlessly as well...

Also, I never said Grid or IBM - so where the hell did that come from? All I mentioned was computing as a utility. I mean, just because you don't see a use for it...
 