MS wants XBox2 out before PS3?

Their current free-to-play attitude is more of a temporary measure and a cover-up, due to the PS2's hardware limitations (i.e. no built-in online hardware).

What does free-to-play have to do with an RJ-45/11 PHY not being built onto the hardware? Mind elaborating on that a little more (or perhaps sharing some of that glue with the rest of us)?

Isn't the GPU in Xbox basically a slightly modified GF3 though?

Wanna show me a GeForce3 with two vertex shaders, an AGTL and a HyperTransport bus?

The BB Navigator software (you can download game demos and view new game information on the game channels) also needs the hard disk and the BB Unit.

I forgot about the BB Navigator... My VF4 is the US one so I haven't tried it with an HD.

Of all the consoles, my understanding is only the xbox gives you this basically for free.

Considering DTS Interactive is relatively 'free' (4-6% on the cpu), I'm not sure that's much of a big deal... As for the GCN, since it lacks any digital audio output (not including the Q), it's not something that you're going to worry about. At most you'd just mixdown your positional audio to DPLII...

Anyway my impression is that I don't think any of the other consoles have audio hardware that's quite at this level of sophistication, but I could be totally wrong.

Well not in a single chip. You'll get no argument over the relative capabilities of the MCP (I enjoyed my brief time on an XDK and spent many hours on the one in my roommate's nForce system). Now the most significant things about the APU IMO (since that's the portion of the MCP we're talking about) are the sheer number of channels, DICE, and the setup engine. In comparison, say, to the PS2, you've got the choice of a couple of output formats (it really depends on how your game leverages the hardware, and how wide an audience you want to target, i.e. more people are going to benefit from Surround or DPLII than DTS), none of which really put a significant load on the CPU. The IOP pretty much does what the APU setup engine does. As far as signal processing goes, they're fundamentally the same (i.e. setting up banks of audio channels and ping-ponging samples between them). It's just that in the case of SPU2 you've got 2 cores (CORE0, CORE1) with 24 channels each to set up (you can do more via software), whereas with the MCP you've got 8 sets with 32 channels each. So in that aspect the APU pretty much beats the pants off of anything out there short of professional gear...
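(Just to put those channel counts side by side, here's a quick back-of-the-envelope tally; the per-core/per-set figures are the nominal ones quoted above, nothing measured:)

[code]
/* Rough tally of the hardware voice counts quoted above (nominal figures
 * from the post, not measured values). */
#include <stdio.h>

int main(void) {
    int spu2_cores = 2, spu2_channels_per_core = 24;  /* PS2 SPU2: CORE0 + CORE1 */
    int apu_sets   = 8, apu_channels_per_set   = 32;  /* Xbox MCP APU voice sets */

    printf("SPU2 hardware voices: %d\n", spu2_cores * spu2_channels_per_core); /* 48  */
    printf("APU hardware voices:  %d\n", apu_sets * apu_channels_per_set);     /* 256 */
    return 0;
}
[/code]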

If we look at pain-free devices that compromise the functionality of a PC, they have already been tried and have failed (WebTV is likely the most notable; it delivered everything it promised and it wasn't close to enough).

Conversely, if you look at the functionality of the PC that compromises many of the pain-free devices we use today, for all its power and flexibility it has failed to replace them...

This has already been tried and backed by tens of millions in marketing, and it was built around an honest 'turn it on and go' product, WebTV. It was significantly cheaper than a PC and still failed to gain widespread acceptance. Why? It compromised functionality. Despite people's prime concern being access to the net, the inability to handle other operations killed any chance WTV ever had in terms of marketplace acceptance.

Well, I can counter your WebTV argument with DoCoMo... However, one fundamental problem with your argument here is that you're comparing a device to a service. So if we compare, say, device to device, I'll use game consoles as a counter-argument. Game consoles have pretty much been 'functionally limited' devices used for the sole purpose of playing videogames. Yet the PC has been around doing the same thing for just as long, and some would argue does a better job (not to mention offers more options in terms of game expansion and end-user contribution), yet the game console market hasn't died or failed, but rather flourished (with some arguing at the expense of PC gaming)...

As far as the whole grid-computing thing goes, it's not like it can be utilized in every single aspect of all games. However, it does display an awful lot of potential for massively multiplayer persistent online worlds, whether it be an Everquest/Galaxies type, or a persistent online Sims, or Starcraft...

When it comes to designing a 3D rasterizer? The i740, when first launched, was supposed to be a high-end part (complete with the price tag) and was supposed to set a new standard in 3D graphics. It failed miserably. How many companies in the world have proven that they can produce a feature-complete 'GPU'? I can only think of a small handful: 3DLabs, SGI, ATi and nVidia.

The i740 was never intended to be a high-end part. You're confusing it with the R3D-1000 and 100 (the i740 was derived from the 100). As for its success or failure, I'd say it did fair. It offered comparable performance to its primary competitor (the Riva128 and ZX) with better image quality (and was cheaper), and outclassed pretty much everything else (Rage II, Rage Pro, Rendition, M3D, Voodoo) 'till Matrox released the G200. Of course there was the Voodoo2, which was in a class of its own (which cost a lot for one, let alone 2 for SLI, and didn't offer any 2D either).

As for the feature-complete 'GPU', mind pointing out the specifications for what a 'feature-complete GPU' entails? It would sure be interesting, since Nvidia coined the term 'GPU' with the NV10 (yet still produces 'GPUs' with more 'features'). I guess one could just say that Nvidia is the only one in that category. I guess if you want to be generous you could include Matrox and ATi. SGI has never really made any 'GPUs', and 3Dlabs doesn't either (they've got a lot of high-end solutions which comprise many chips; dunno if any one of them is a GPU, of course they've got those new-fangled 'VPU' thingies though)... ;) :p

(BTW, I kinda forgot that the Wildcat lineup came from Intergraph, and the Oxygen line came from Dynamic Pictures...)

I guess if you want to get a little more broad-minded and generous, you could say Nvidia, ATi, Matrox, 3Dlabs, VideoLogic/PowerVR, Intel, SiS, Via, SGI, HP, IBM, Sun, TI, Fujitsu, Hitachi, Mitsubishi, E&S (are they still around?), and to be really nice I'll throw in Sony and Toshiba, have all made significant enough contributions to 3D (at least on the hardware side) to be considered 'experienced'...

Why did I state that an Intel platform would be more difficult to work with? Based on their history I don't see them coming up with a feature-complete part capable of exploiting the HLSL in up-to-date (let alone ahead-of-the-curve) DX revisions.

I'd think the off-and-on animosities between the two would be a bigger obstruction. As for "the HLSL", which would that be? Last time I checked there wasn't a single standard. Hell, I can think of several off the top of my head (Renderman, Pfman, RTSL, ISL, ESMTL, Cg, whatever's in OpenGL 2.0).

They have failed to release a DX7-level part to date; I don't see them pulling out the engineers to accomplish such a task when they specialize in processor-centric applications.

Perhaps because they do quite fine (downright dominant) with the capabilities of their core-logic designs? Again, if it wasn't a significant market why would Nvidia, ATi (Via, and SiS for that matter) be getting so involved with that sector? In case you haven't noticed, the high-end consumer 3D hardware market isn't all that large; it's costly, and not very profitable. At best it's good for pride, and if it's your core business and you're good at it, then it can give you some good 'trickle-down' hardware at the low-end level and, if the parts are good enough, some business at the high-margin pro level.

As for Intel 'pulling' engineers: perhaps from an existing project per se. Of course they've got so much stuff going on, I doubt they lack the engineering resources for such a task. In case you haven't noticed, it's not a far reach to go from CPU design to 'GPU' design (one could argue that CPU is probably more difficult at the logic level). Of course there's the analog side to GPU design as well, but considering Intel's efforts to catch up to IBM in mixed-signal processes I don't think that aspect would be too difficult. And they've obviously got the software know-how... (And yes, I know 3DR was weak sauce).

If the design were to fall to Intel, I see them almost assuredly following Sony's design theme (reliance on CPU power to fill in for GPU functionality). Even with their sole attempt to enter the high-end consumer 3D market they relied on other platform technology to help them cover design issues (the i740 add-in AGP boards had no on-board texture memory... WTF were they thinking?).

Well, one can see Intel focusing on a 'CPU'-centric theme (after all, they've pushed it with their CPU extensions). Also, your ideas about Sony's design are somewhat flawed in comparison. Intel's extensions have relied on using the CPU's execution resources to accomplish all the computation, whereas Sony has gone the route of relying heavily on dedicated execution resources to avoid being CPU bound (while providing functionality to allow the CPU to have control and direct access to some of the dedicated functionality).

As for what Intel was thinking with the i740, how about actually utilizing the full spec of the bus? I mean besides it, the Rage chips were the only other ones to really leverage AGP. And I'm afraid I'm going to have to call bullshit on the lack of texture memory on the i740 boards. I had a Starfighter and it had 8MB of memory. Now if you specified DME in its DX settings, then it would allocate main memory for textures. You could still DMA textures to the board for execution, however. That had to be allowed for the PCI cards to get any texture data; of course those were weird as they had 16-24MB of memory (8-16MB allocated for textures) in which half or more was made to appear as if it were behind the AGP bus... Anyway at that time DME wasn't so bad as the differential in memory performance between graphics cards and main memory wasn't nearly as bad as it is today...

I was interested in the game before; now I'm even more so.

I was rather interested in their approach to releasing the game in pieces across 4 discs...

And Archie, seeing that you made me curious about it in that other thread, I blame you if I miss any deadlines in the following weeks.

Sorry? :p
 
Wanna show me a GeForce3 with two vertex shaders, an AGTL and a HyperTransport bus?

The NV2A is based on the GeForce3 design with some functions taken out and some added to facilitate its console-centric target platform. In other words, it's a modified NV20, therefore the development costs came mostly from the NV20.
 
[quote="archie4oz
Considering DTS Interactive is relatively 'free' (4-6% on the cpu), I'm not sure that's much of a big deal... As for the GCN, since it lacks any digital audio output (not including the Q), it's not something that you're going to worry about. At most you'd just mixdown your positional audio to DPLII...

Actually that is wrong. Every Gamecube made has capability for digital audio output. It outputs digital audio via the digital A/V output jack, in DAI format. You can build a small board to convert that to S/PDIF, and when connected to a decoder, you get 48KHz 16bit stereo PCM.
 
Kudos to Nintendo for cutting corners! That said, it's highly unlikely that a GC will ever find its way into my room, simply due to the lack of S/PDIF (yes, I love my receiver that much).
 
zurich said:
Ozy, I bet it ran off with the Dolby Digital cutscenes and got married ;)


Did we get the DD cutscenes or not? I thought we did... haven't used my PS2 on a DD receiver yet :(

I know the Bouncer had them :)
 
Actually that is wrong. Every Gamecube made has capability for digital audio output. It outputs digital audio via the digital A/V output jack, in DAI format. You can build a small board to convert that to S/PDIF, and when connected to a decoder, you get 48KHz 16bit stereo PCM.

great! now if a big enough group of us GC owners install this small adapter, the developers will support it? :D
 
Ozy: NOPE, no DD cutscenes.

MGS2 had the most laughable DD support out of anything though... options to turn it on all over the menu, the DD logo printed on the box and DVD, etc. etc., and yet, what is there? The intro (mildly cool), and the ending JAZZ SONG. Man oh man, what a load of crap.

I think I would have preferred the cutscenes to be rendered in-game but recorded to MPEG (like Xenosaga), and then have DD playback. But I guess Kojima-san really wanted to prove to the world that his game was real-time :p
 
zurich said:
Kudos to Nintendo for cutting corners! That said, it's highly unlikely that a GC will ever find its way into my room, simply due to the lack of S/PDIF (yes, I love my receiver that much).

Well, I have a $1,200 receiver, and I use my Gamecube on it...ProLogic2 is actually pretty nice.

I'm kinda surprised Nintendo didn't add an SPDIF converter to the component video cable kit though.
 
MrSingh said:
Actually that is wrong. Every Gamecube made has capability for digital audio output. It outputs digital audio via the digital A/V output jack, in DAI format. You can build a small board to convert that to S/PDIF, and when connected to a decoder, you get 48KHz 16bit stereo PCM.

great! now if a big enough group of us GC owners install this small adapter, the developers will support it? :D

Well, it's supported in all games in a technical sense...but it would probably be easy for a dev to add a DTS encoder.
 
Vince-

I'm quickly becoming irritated with your inability to think about things outside of your little scope and perceptions... come on now.

I'm thinking about this thing called reality. It is not my scope or perception, it is the marketplace reality.

The vast majority of future computing uses (i.e. total, not just PC-based) will be in smaller devices (cell phones, PDAs, digital newspapers/clothes/et al.) that don't require near-instantaneous access and will be used for email, internet, communications, simple programs, education purposes, video, etc. It's for these devices that computing will become a utility first.

These things all work now. Tell me how they would be improved by you paying $40 a month?

Unlike you, who I feel may get off on tinkering with your PC and fixing every error that pops up, most people don't. They're going to want to buy a device, plug it in and have it work. They don't want to continually install new versions of MS Encarta, and Windows, and Word, and Office, and any other goddamned software that has patches or updates.

This is likely your biggest problem with the vision you have. You think that the only way to achieve this is by having computers act as a utility? Auto-updating software already exists (and has for some time, for that matter). It is your vision that is very narrow here. You are implying that this can only be done if computing becomes a utility. I am saying that the two things are entirely different subjects.

For these people (i.e. the ones who have lives keeping them too busy to play for hours getting Windows XP to work), computing as a utility will be a godsend. Have it through broadband, wireless, who cares. I pay, say, $40 a month and I can plug in (or wirelessly connect) all my electronic devices and they work and communicate with each other, always updated and working flawlessly. Hell, it allows me to access volumes of programs and electronic media seamlessly as well...

None of this is tied to computing becoming a utility. You are talking about two completely separate things as if they were one and the same. Hassle-free computing is something I see as entirely desirable; I would like you to explain why that would be exclusive to computing becoming a utility.

Access to electronic media is a good example where you are dealing with loads of different corporations; having one of them involved with a utility structure would likely make things more complicated than they would be on an open market.

I mean, just because you don't see a use for it.

Tell me the use for it that is exclusive to it. You tell me what you could do having computing as a utility versus not.

Archie-

So if we compare, say, device to device, I'll use game consoles as a counter-argument. Game consoles have pretty much been 'functionally limited' devices used for the sole purpose of playing videogames. Yet the PC has been around doing the same thing for just as long, and some would argue does a better job (not to mention offers more options in terms of game expansion and end-user contribution), yet the game console market hasn't died or failed, but rather flourished (with some arguing at the expense of PC gaming)...

Start charging people a monthly fee for consoles and watch what happens.

As far as the whole grid-computing thing goes, it's not like it can be utilized in every single aspect of all games. However, it does display an awful lot of potential for massively multiplayer persistent online worlds, whether it be an Everquest/Galaxies type, or a persistent online Sims, or Starcraft...

OK, so you have a use for one million computer users out of a billion plus.

The i740 was never intended to be a high-end part. You're confusing it with the R3D-1000 and 100 (the i740 was derived from the 100).

I still have the print publications from when it launched as the i740 in the Starfighter cards.

It offered comparable performance to its primary competitor (the Riva128 and ZX) with better image quality (and was cheaper), and outclassed pretty much everything else (Rage II, Rage Pro, Rendition, M3D, Voodoo) 'till Matrox released the G200. Of course there was the Voodoo2, which was in a class of its own (which cost a lot for one, let alone 2 for SLI, and didn't offer any 2D either).

The V2 was the same price as the Starfighter when it launched (I still have the original comparisons).

As for the feature-complete 'GPU', mind pointing out the specifications for what a 'feature-complete GPU' entails?

A full DX feature set. Given that we are talking about the Xbox, I mistakenly assumed that would be a given. As far as using the term GPU, it is quicker than typing out "graphics rasterizer chip" over and over, so I will continue to use it :)

I guess if you want to get a little more broad-minded and generous, you could say Nvidia, ATi, Matrox, 3Dlabs, VideoLogic/PowerVR, Intel, SiS, Via, SGI, HP, IBM, Sun, TI, Fujitsu, Hitachi, Mitsubishi, E&S (are they still around?), and to be really nice I'll throw in Sony and Toshiba, have all made significant enough contributions to 3D (at least on the hardware side) to be considered 'experienced'...

Matrox hasn't raised the bar since the G400, and even that only had a couple of months before the GF hit. PVR has never been close to having a complete (class-leading, or at minimum a full DX feature set at the time of release) feature set. SiS and Via have always been decidedly low end. HP's fx line, while extremely powerful for its limited market, does not cut it as a viable solution for gaming graphics. Sun, IBM and E&S (they are still around, or at least they were last I was aware) aren't competitive.

As for "the HLSL" which would that be? Last time I checked there was a single standard. Hell I can think of several off the top of my head (Renderman, Pfman, RTSL, ISL, ESMTL, Cg, whatever's in OpenGL 2.0)?

Apologies again, I thought it would be a given that I was talking about DX's HLSL.

Perhaps because they do quite fine (downright dominant) with the capabilities of their core-logic designs? Again, if it wasn't a significant market why would Nvidia, ATi (Via, and SiS for that matter) be getting so involved with that sector?

That is akin to saying Hyundai could compete in the Le Mans series; it's just that they don't because they make money in their current market. Intel does not have the hundreds of engineers who specialize in the particular application we are discussing.

In case you haven't noticed, it's not a far reach to go from CPU design to 'GPU' design (one could argue that CPU is probably more difficult at the logic level). Of course there's the analog side to GPU design as well, but considering Intel's efforts to catch up to IBM in mixed-signal processes I don't think that aspect would be too difficult. And they've obviously got the software know-how... (And yes, I know 3DR was weak sauce).

Actually, it is an enormous difference switching between CPU design and 'GPU' design. CPU design at Intel has a roughly five-year product cycle and every transistor is hand-tuned, whereas 'GPU' design has about one third that amount of time and most of it isn't hand-tweaked (that's a paraphrase from an Intel engineer who has been working on IA64 for some time; I've had this discussion before). The entire design philosophy is significantly altered along with the execution of it.

Well, one can see Intel focusing on a 'CPU'-centric theme (after all, they've pushed it with their CPU extensions). Also, your ideas about Sony's design are somewhat flawed in comparison. Intel's extensions have relied on using the CPU's execution resources to accomplish all the computation, whereas Sony has gone the route of relying heavily on dedicated execution resources to avoid being CPU bound (while providing functionality to allow the CPU to have control and direct access to some of the dedicated functionality).

So you don't consider the EE in its entirety the CPU of the PS2?

And I'm afraid I'm going to have to call bullshit on the lack of texture memory on the i740 boards. I had a Starfighter and it had 8MB of memory. Now if you specified DME in its DX settings, then it would allocate main memory for textures. You could still DMA textures to the board for execution, however. That had to be allowed for the PCI cards to get any texture data; of course those were weird as they had 16-24MB of memory (8-16MB allocated for textures) in which half or more was made to appear as if it were behind the AGP bus...

Intel is the one that made the claim of not having on-board texture memory for the AGP parts (again, I'm going by print publications from the time and Intel's own quotes).

Anyway at that time DME wasn't so bad as the differential in memory performance between graphics cards and main memory wasn't nearly as bad as it is today...

Roughly half as fast.
 
Start charging people a monthly fee for consoles and watch what happens.

Well people do it for TV even though TV is available for free (hehehe depending on where you live)... :p

The V2 was the same price as the Starfighter when it launched (I still have the original comparisons).

The only ones I remember being in the same price range as the Voodoo2 was the PCI Starfighters (16 and 24MB). I paid $129 for my AGP Starfighter (8MB)...

Matrox hasn't raised the bar since the G400, and even that only had a couple of months before the GF hit. PVR has never been close to having a complete (class-leading, or at minimum a full DX feature set at the time of release) feature set. SiS and Via have always been decidedly low end. HP's fx line, while extremely powerful for its limited market, does not cut it as a viable solution for gaming graphics. Sun, IBM and E&S (they are still around, or at least they were last I was aware) aren't competitive.

Well feature-wise, Parhelia raised the bar (albeit temporarily). As for some of the others, even though PowerVR is focusing on embedded cores now, they did at least do a console part. SiS and Via don't look to be staying low-end forever (well, SiS at least looks to be pushing the Xabre line more). As for HP, Sun, and IBM (did you forget about Sun's Creator3D, Elite3D, Expert3D, and XVR1000 boards? Or IBM's PowerGXT accelerators? They're along the same lines as HP's fx), while mainly niche high-end, they're simply mentioned to point out that Nvidia, ATi, and SGI don't have a monopoly on 3D. After all, ArtX was pretty much a nobody with a north-bridge controller that flopped, yet went on to produce Flipper...

That is akin to saying Hyundai could compete in the Le Mans series; it's just that they don't because they make money in their current market. Intel does not have the hundreds of engineers who specialize in the particular application we are discussing.

Well, they could... They participate in WRC (along with several other rally series) and various sports car series. Mazda's racing record was rather abysmal and nearly non-existent (despite actually producing a couple of low-cost sports cars), yet they went on to be the only Japanese auto manufacturer to actually win the 24 Heures du Mans, whereas Toyota and Nissan, with much more illustrious racing histories, devoted far more resources to Le Mans, yet neither has won (although Nissan finishing all 4 entries in the top 10 was *very* impressive). Look at MG, gone for eons, then they come out with the hottest LMP675 prototype out there (one that's able to beat the pants off of some of the LMP900 prototypes above it). The same could be said for Bentley as well regarding their EXP Speed8 in the LMGTP class...

Actually, it is an enormous difference switching between CPU design and 'GPU' design. CPU design at Intel has a roughly five-year product cycle and every transistor is hand-tuned, whereas 'GPU' design has about one third that amount of time and most of it isn't hand-tweaked (that's a paraphrase from an Intel engineer who has been working on IA64 for some time; I've had this discussion before). The entire design philosophy is significantly altered along with the execution of it.

Well, considering how many different processors Intel's released over the past 'couple' of years, and how much Matrox has milked the G400 core, and Nvidia's milked their register combiners, the whole life-cycle argument is kinda weak. As for hand-tweaking the architecture, it does indeed happen on GPUs (just not as much).

It is funny that you mention IA-64, since it basically tries to take general computing along the GPU path of massive resources and parallelism. Take an R300 and an Itanium2: what do you have? Essentially a big, fat parallel processor with lots of resources (registers, caches, execution units) that steps through in-order, predicated data. Hey, this is no problem for the R300. But an Itanium2 has a problem: programmers (and more explicitly, their compilers) are throwing a ton of small, branchy code blocks with all sorts of memory dependencies at it. Get rid of that problem, and suddenly all the ILP extraction hardware (and compiler sophistication) isn't needed and you can devote more resources to deal with data computation...
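(To make the 'small, branchy code' vs. 'in-order, predicated data' contrast concrete, here's a minimal sketch of my own, not from any actual compiler or driver: the same absolute-value loop written with a per-element branch, and written as a data-parallel select that maps naturally onto predicated, wide in-order hardware.)

[code]
/* Illustrative only: branchy form vs. predicated/select form of the same loop. */
#include <stdio.h>

#define N 8

int main(void) {
    float src[N] = {1, -2, 3, -4, 5, -6, 7, -8};
    float a[N], b[N];

    /* Branchy form: a conditional jump per element, which is what chews up
     * branch predictors and ILP-extraction hardware. */
    for (int i = 0; i < N; i++) {
        if (src[i] < 0.0f)
            a[i] = -src[i];
        else
            a[i] = src[i];
    }

    /* Predicated form: evaluate a condition, then select. A compiler for
     * predicated or wide in-order hardware can turn this into a conditional
     * move / predicated op instead of a taken branch, keeping every lane busy. */
    for (int i = 0; i < N; i++) {
        int neg = (src[i] < 0.0f);        /* the predicate */
        b[i] = neg ? -src[i] : src[i];    /* select rather than jump */
    }

    for (int i = 0; i < N; i++)
        printf("%g %g\n", a[i], b[i]);
    return 0;
}
[/code]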

So you don't consider the EE in its entirety the CPU of the PS2?

One could call it that for lack of a better word, I call the EE, the "EE." I consider the EEcore to be the "CPU."

Roughly half as fast.

Well, at least on the i740 it wasn't even that bad... It depended on the memory you had (some had 66MHz SDRAM, some 100MHz). My Starfighter had 100MHz SGRAM and my PC had PC66 (PC100 came out shortly after)... DME did have its plusses at the time as well (mainly giving your GPU a crapload of bandwidth)...

Oh and BTW, IIRC Intel does have a DX7 part in the 845G (dunno about the 830, and 810). It supports DXTn, cube maps, DOT3 bump-mapping, point sprites, multi-texturing (4-stage), etc... Not exactly a real barn-burner, but at least it's a lot more modern than the i740...
 
Well people do it for TV even though TV is available for free (hehehe depending on where you live)...

Don't know about where you live, but where I live I can grab a whopping two channels for free :)

The only ones I remember being in the same price range as the Voodoo2 was the PCI Starfighters (16 and 24MB). I paid $129 for my AGP Starfighter (8MB)...

List price that I have states $199 (although that isn't street; neither was the V2's $199 tag).

Well feature-wise, Parhelia raised the bar (albeit temporarily).

I thought P10 hit first?

did you forget about Sun's Creator3D, Elite3D, Expert3D, and XVR1000 boards? Or IBM's PowerGXT accelerators? They're along the same lines as HP's fx

Didn't forget about them; I haven't seen one that is competitive in a couple of years (compared to the fx or even the Wildcats).

while mainly niche high-end, they're simply mentioned to point out that Nvidia, ATi, and SGI don't have a monopoly on 3D. After all, ArtX was pretty much a nobody with a north-bridge controller that flopped, yet went on to produce Flipper...

The ArtX team was SGI :)

Well, considering how many different processors Intel's released over the past 'couple' of years, and how much Matrox has milked the G400 core, and Nvidia's milked their register combiners, the whole life-cycle argument is kinda weak. As for hand-tweaking the architecture, it does indeed happen on GPUs (just not as much).

I bring up the life-cycle difference because Intel tends to hand-tune everything and takes ~five years to complete a project (although they have dozens of projects under design at once). They don't have the experience of attempting to compete with the specialists in the field. nVidia and ATi have rather large advantages in nearly all areas of 3D (those two in particular, although SGI seems to be holding on to their niche).

Look to Intergraph (who had to sell out to 3DL), 3DLabs, Sun, DEC, IBM and HP (SGI too, though they haven't fallen as badly), who utterly dominated the high-end 3D market up until three years ago. Now, the mass-market companies are threatening to eclipse nearly all of their advantages in the pro markets simply as a by-product of advancing consumer/gamer 3D cards. The economies of scale have had nV and ATi hiring all (well, as much as they can) of the top talent in the industry. They simply have significantly more money than the other players in the 3D arena. Now if we were talking about Intel making an honest attempt at entering the 3D market I think they certainly could do it, but with them landing the XB2 deal without a more serious stance toward the industry at large, I see them relying on platform technologies over raw 'GPU' power, which is what MS is very clearly planning on (look at DX).

It is funny that you mention IA-64, since it basically tries to take general computing along the GPU path of massive resources and parallelism. Take an R300 and an Itanium2: what do you have? Essentially a big, fat parallel processor with lots of resources (registers, caches, execution units) that steps through in-order, predicated data.

And it would appear to lend itself quite nicely to software computation of what would be 'GPU' tasks, doesn't it ;)

One could call it that for lack of a better word, I call the EE, the "EE." I consider the EEcore to be the "CPU."

Fair enough. From a coder's standpoint I would assume that you tend to have to pay a decent amount of attention to what is handling what within the EE.

Well, at least on the i740 it wasn't even that bad... It depended on the memory you had (some had 66MHz SDRAM, some 100MHz). My Starfighter had 100MHz SGRAM and my PC had PC66 (PC100 came out shortly after)... DME did have its plusses at the time as well (mainly giving your GPU a crapload of bandwidth)...

PC100, in a theoretical sense, didn't offer any edge, as AGP 2x was limited to ~512MB/s (in the real world you obviously had other devices utilizing memory, although this was almost certain to cover more than the gap between the two in terms of bandwidth). IIRC, wasn't a 128-bit bus standard on graphics cards by that point? If so, even assuming you hit peak AGP 2x rates all the time, it still was less than half as fast as on-board RAM.

Oh and BTW, IIRC Intel does have a DX7 part in the 845G (dunno about the 830, and 810). It supports DXTn, cube maps, DOT3 bump-mapping, point sprites, multi-texturing (4-stage), etc... Not exactly a real barn-burner, but at least it's a lot more modern than the i740...

Do you have a link? Honestly curious here as everything I've seen on the 845G states no hard TnL and no EMBM (not that the latter is a major issue).
 
Don't know about where you live, but where I live I can grab a whopping two channels for free

That's sort of why I threw in the qualifier. I used my parents' place, since the Inland Empire (in conjunction with LA, Ventura, and Orange counties) is a pretty massive sprawl of an area, where you can typically get 10-20 channels over the air. Of course, no matter where you are you can always seemingly depend on televangelism to provide you with a channel... :-?

Of course in Japan you're practically guaranteed to be able to get NHK, but you *do* have to pay for that... :(

List price that I have states $199 (although that isn't street; neither was the V2's $199 tag).

Really? I always remembered them easily being over $250 (12MB version), hence the infamous $600 for SLI rigs...

I thought P10 hit first?

You might be right, although I believe some of Parhelia's functionality was more accessible, whereas the P10 is one to be explored...

The ArtX team was SGI

Well a bunch of SGI guys... But you can pretty much find SGI guys all over the place...

Now if we were talking about Intel making an honest attempt at entering the 3D market I think they certainly could do it, but with them landing the XB2 deal without a more serious stance toward the industry at large, I see them relying on platform technologies over raw 'GPU' power, which is what MS is very clearly planning on (look at DX).

Well I *was* implying an 'honest' attempt, although mainly at them building something similar to nForce (though even more capable) to fill in the higher-end integrated market...

PC100, in a theoretical sense, didn't offer any edge, as AGP 2x was limited to ~512MB/s (in the real world you obviously had other devices utilizing memory, although this was almost certain to cover more than the gap between the two in terms of bandwidth). IIRC, wasn't a 128-bit bus standard on graphics cards by that point? If so, even assuming you hit peak AGP 2x rates all the time, it still was less than half as fast as on-board RAM.

Actually it did, as it could fill AGP command and data buffers faster than PC66 (even though the bus transfer would be the same). Plus, PC100 could service more or longer data tenures to its clients in the same amount of time. Also, considering the size of on-board memory at the time (4MB-8MB), texture page misses weren't exactly uncommon. DME in this respect was definitely better, as it performed operations on a frame basis and could release bus grants quicker than a DMA'd texture across AGP (or worse, PCI).

Also, it had a 64-bit bus; 128-bit busses didn't start appearing until the TNT/TNT2, G400, and Rage128...
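(For what it's worth, the peak numbers being thrown around work out roughly like this; nominal widths and clocks only, ignoring arbitration and refresh overheads:)

[code]
/* Rough peak bandwidths for the busses under discussion (nominal figures;
 * real-world numbers are lower due to protocol overhead). */
#include <stdio.h>

static double peak_mb_s(double mhz, int bits, int transfers_per_clock) {
    return mhz * (bits / 8.0) * transfers_per_clock;  /* MB/s = MHz * bytes * rate */
}

int main(void) {
    printf("AGP 1x  (32-bit @ 66MHz):  %4.0f MB/s\n", peak_mb_s(66.6, 32, 1));   /* ~266 */
    printf("AGP 2x  (32-bit @ 66MHz):  %4.0f MB/s\n", peak_mb_s(66.6, 32, 2));   /* ~533 */
    printf("PC66 SDRAM   (64-bit):     %4.0f MB/s\n", peak_mb_s(66.6, 64, 1));   /* ~533 */
    printf("PC100 SDRAM  (64-bit):     %4.0f MB/s\n", peak_mb_s(100.0, 64, 1));  /*  800 */
    printf("100MHz SGRAM (64-bit):     %4.0f MB/s\n", peak_mb_s(100.0, 64, 1));  /*  800 */
    printf("100MHz SDRAM (128-bit):    %4.0f MB/s\n", peak_mb_s(100.0, 128, 1)); /* 1600 */
    return 0;
}
[/code]

So on paper a 64-bit, 100MHz board and a full-rate AGP 2x stream weren't that far apart, which is roughly the point about DME not being so bad back then.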

Do you have a link? Honestly curious here as everything I've seen on the 845G states no hard TnL and no EMBM (not that the latter is a major issue).

Well, it doesn't have TnL; that's what the big, nasty P4 is for (and fits nicely with your CPU-centric model for Intel). ;)

Besides, fixed-function TnL would've been a waste for low-cost integrated graphics, and it would go idle anyway with any vertex-shader-based title...

Oh, and here's your feature link:
http://developer.intel.com/support/....htm?iid=ipp_dlc_chip_graphics+info_2d3d&
 
Well, I'll just resurrect this thread... with something that came to my mind recently....

If Sony plays its cards right, MS, even if they manage to outdo the PS3 in some areas, won't be able to brag about it, or even hype it.

Why do I believe this, you ask?

Well here goes....

We have all seen how the PS2 outdoes the Xbox in some areas, and it's still uneclipsed in a few areas even by the latest GPUs, which have come nearly 3 years later... areas like fillrate and bandwidth have not been significantly surpassed (several-fold increases...) considering the amount of time that has passed.

On another note, we also saw that MS used their MHz to boast about their CPU and claim it to be more powerful than the PS2's CPU.

We see too that the PS2 severely lagged in the GPU features arena, and that the GS VRAM and performance were also hindered due to the manufacturing tech of the time...

Now what does this all mean?

If Sony does the PS3 press releases right, MS won't be capable of outdoing them...

For example, instead of giving the MHz speed, they just completely omit that and concentrate on the "SUPER computah on a chip"; they give many stats showing it to be among the top... I dunno, 100-10 supah computahs... they say how it's dozens of times better than the latest Intel Pentiums, etc.... This will basically guarantee that any Pentium MS touts will be laughable...

As for the GPU area, Sony just has to focus on peak specs that aren't likely to be surpassed: fillrate, VRAM bandwidth, poly rate (if they do achieve 75B peak, they should just give that figure and forget about it), etc... again not mentioning areas where they'd be surpassed...

Now to the demos: head demo, car demos, etc... many, like the head, will clearly outdo anything out there, nearly indistinguishable from the real thing... people would think all characters would be at that level of detail... and the cars should be nigh photoreal... This would guarantee that no demo MS threw out afterward would outshine the PS3's demos...

What would this all do?

When MS announced their Xbox2 it would just be a ME TOO... a ME TOO without GTA... a ME TOO WITHOUT a SUPAH COMPUTAH and with an average desktop CPU in the eyes of the media... with no noticeable improvement over the PS3... it would get really nasty....
 
[off-topic]
Archie - if Lockheed had brought out R3D-100 as a consumer/gamer 3D card and used it against Voodoo1 and even Voodoo2, how well do you think it would have done, in terms of performance, feature set, image quality, ease of development, etc? I pretty much have my own idea but wanted to see if other people's thoughts were similar.


You think developers would have embraced the R3D-100 instead of Voodoo/Glide, if priced at $180-200... the price originally mentioned in 1995 for a Lockheed PC card, though that wasn't actually for the -100; it would be the later i740.

Also, you say the i740 was derived from the R3D-100. I wonder how much. No doubt the R3D-100 was in a totally different class than the i740.

From what I understand, the i740 was a single chip, and it was also just a 3D accelerator: it lacked a geometry processor (what would later be called a T&L unit in the Nv GPU era), whereas the R3D-100 was a complete graphics-processing chipset, which included a separate geometry processor in addition to a graphics processor and a texture processor. So unlike the i740, the -100 provided its own geometry and lighting, not burdening the CPU for those things (also unlike Voodoo and all the others).

I strongly believe that if the -100 had been introduced intact in 1996, or even as late as 1997, as a gaming card it would have provided extremely good polygon throughput on almost any CPU, along with unrivaled texture mapping & image quality.

IIRC, R3D-100 specs were 750,000 polys/sec with every feature on, and 33M pixels/sec... not as low as it sounds to the typical PC gamer given the features & image quality. No doubt these specs were much more robust and held up extremely well in reality compared to other 3D accelerators with higher paper specs such as 3Dfx, ATI, Nv, Rendition, PowerVR, Trident, S3, etc.
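(As a rough sanity check on how far 33M pixels/sec actually goes, here's my own back-of-the-envelope, assuming the quoted spec and a plain 640x480 target:)

[code]
/* Back-of-the-envelope: depth complexity that 33M pixels/sec buys at 640x480
 * (assumes the quoted fillrate; ignores clears, blending and overhead). */
#include <stdio.h>

int main(void) {
    double fillrate = 33e6;           /* quoted pixels/sec */
    double frame    = 640.0 * 480.0;  /* pixels per frame  */

    printf("overdraw budget at 30 fps: %.1fx\n", fillrate / (frame * 30.0)); /* ~3.6x */
    printf("overdraw budget at 60 fps: %.1fx\n", fillrate / (frame * 60.0)); /* ~1.8x */
    return 0;
}
[/code]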

It's too bad that the $180-$200 LM graphics card turned out to be the i740 and not the R3D-100. I would have bought a -100 without hesitation even over Voodoo2.

I'm pretty certain from everything I've read over the years that Sega had considered both the R3D-100 and the i740 (and probably derivatives of both) for a console chipset.

What a shame LM's Real3D never really made it in the consumer market; they shined so brightly in arcades. At least some of their engineers are now at ATI (I think); perhaps some of that talent was put to use in the R300/Radeon 9700.
 
One thing I've noticed about your posts Megadrive, is that you looooooove "what ifs" and phantom specs. :)

Not having a go, just commenting. I find it interesting to read all your ideas on what could have happened.
 
Hi Ben,

You must remember that you are speaking from the perspective of an IT person. My parents haven't even heard of the Xbox *gasp*. Give them a computer and the most they can do is turn it on and find Winword in the Start menu. If it fails to POST, or the HDD fails, they don't have the first damn clue what to do.

You say that computing as a utility is independent of trouble-free computing; I disagree. As long as the basic computing hardware (HDD, motherboard, CPU, etc.) is at the consumer's end, there will be never-ending trouble. Software has improved dramatically recently (XP is a godsend) but hardware still fails and gives totally random errors (sometimes it freezes, sometimes not; take it to a repair shop, the problem disappears, take it back, frozen again) that ordinary consumers just can't stand. We are here talking because we've all been through the shits; people *SHOULDN'T* have to go through it in this 'information age'.

By separating the PC hardware from the consumer and instead getting them to use the information through a reliable, un-expandable device, you basically eradicate all problems. No more motherboards failing, no more CPUs overheating, no more HDDs clicking to death, etc.

The bottom line is, no matter how much friendlier software gets, consumers will still suffer issues which can't be solved without re-thinking the entire 'computing' architecture. If the burden of operating such a device is reduced to the level of a Palm handheld, then we have pretty much opened the market to x-fold more people, with many magnitudes more ease.

We can all agree that computers will never get "fast enough"; we'll always want more speed. That rules out having a non-expandable 'computing' platform at the consumer's end; they can't upgrade. If they use an expandable platform, they'll run into problems, no questions asked. People NEED to be separated from the hardware. Hardware should ideally be managed by professionals on the back end, with the consumer accessing what they WANT, be it information, entertainment, whatever.

As SA said a while back, people don't buy a computer for the physical item, they buy it for the power to COMPUTE. So if the POWER to COMPUTE is available trouble-free through the airwaves, who'd choose a clumsy, troublesome desktop box?
 