Predicting the Xbox-2

Dual-chip solutions are horrible solutions. I already talked about why in another post: it's not the cheapest way, and it carries a static cost that doesn't scale with lithography - just an overall bad idea.

Much better is to have a third party design a bleeding-edge GPU and go with it.
 
The only way I see MS actually putting two GPUs in Xbox2 is if Sony really just runs over anything MS has at that point. Otherwise...
 
Paul said:
The only way I see MS actually putting two GPUs in Xbox2 is if Sony really just runs over anything MS has at that point. Otherwise...

Exactly what Sega did when they realised the power behind the PSX (original PS).

I don't think history will repeat itself this time though. That would be far too embarrassing and even more damaging than it was to Sega.

I do see a Plan B and even a Plan C from MS.

I see only a Plan A and Plan A.1 from Nintendo (I still think they are far too stubborn and out of the loop as to what CURRENT mass-market gamers want).
 
Tahir said:
Paul said:
The only way I see MS actually putting two GPUs in Xbox2 is if Sony really just runs over anything MS has at that point. Otherwise...

Exactly what Sega did when they realised the power behind the PSX (original PS).

I don't think history will repeat itself this time though. That would be far too embarrassing and even more damaging than it was to Sega.

I do see a Plan B and even a Plan C from MS.

I see only a Plan A and Plan A.1 from Nintendo (I still think they are far too stubborn and out of the loop as to what CURRENT mass-market gamers want).

Of course, all three graphics card companies that we'd think have even a remote chance of getting the contract have very good ways to do dual (or quad, or more) GPUs. Nvidia has SLI. ATI has the tech they used in the Rage Fury MAXX - they have confirmed that they are still developing it, and I'm sure the version they'd use for the X2 would be much, much better than what was used in the MAXX. PowerVR can split the tiles up between the GPUs; they did a very good job with the Naomi 2. Who knows, maybe they will design a pure T&L chip like the Elan for the Xbox 2, to help push insane polygon counts.
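
Just to make the contrast concrete, here's a rough sketch of the unit of work each scheme deals to a given GPU (a toy illustration of mine, not anyone's actual driver code):

Code:
/* Three classic ways to split rendering across two GPUs.
   The schemes differ mainly in the unit of work they hand out. */

/* SLI (scan-line interleave, 3dfx style): even scanlines to GPU 0, odd to GPU 1 */
int sli_owner(int scanline)    { return scanline % 2; }

/* AFR (alternate frame rendering, Rage Fury MAXX style): whole frames alternate */
int afr_owner(int frame)       { return frame % 2; }

/* Tile splitting (PowerVR style): the scene is already cut into tiles, so deal those out */
int tile_owner(int tile_index) { return tile_index % 2; }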

Point is, there are many ways for MS to catch or surpass Sony. Time will tell. As always, I will read up on the tech and then be disappointed with the actual games from all the companies.
 
Ah... my point was that Sega made a last-minute type of decision to add a second CPU - not quite the same as what may happen with MS. I still think it is smarter to go with one GPU unless, as you state jvd, you have a second, even more specialised sub-GPU lessening the load on the main GPU.

Remember VU0, VU1, etc... but instead of making them hard to use and general-purpose, perhaps the best thing to compete with Sony is to make additional processing power available for specific tasks that are even more minute than the whole T&L process.

And then, at a later stage, if you can package the whole lot into one unit, that would allow you to at least break even in the future.

(Sorry, my turn to ramble.)

Edit: bold
 
Tahir said:
Ah... my point was that Sega made a last-minute type of decision to add a second CPU - not quite the same as what may happen with MS. I still think it is smarter to go with one GPU unless, as you state jvd, you have a second, even more specialised sub-GPU lessening the load on the main GPU.

Remember VU0, VU1, etc... but instead of making them hard to use and general-purpose, perhaps the best thing to compete with Sony is to make additional processing power available for specific tasks that are even more minute than the whole T&L process.

And then, at a later stage, if you can package the whole lot into one unit, that would allow you to at least break even in the future.

(Sorry, my turn to ramble.)


Edit: bold

Look at it this way. Let's take the Kyro boards. You know it's a PowerVR chip on there. Its specs were about that of a TNT2, yet it would keep up with a GeForce 2 Ultra at some points.

So think of a Series 6 or 7 GPU. It would require fewer MHz to do the same amount of work, all things being equal. Not only that, but it would use slower RAM. That right there would increase yields and reduce cost.

Now you go with two of them. The GPU already splits up a scene into tiles and renders them, so all you would need to do is have the two GPUs alternate which tiles they render. If you do this at a hardware level, the developers don't need to worry about which GPU is doing which. You would get almost double the performance (nothing is 100%).
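
A minimal sketch of that hardware-level interleave (my own toy code - the 32x32 tile size, resolution, and names are assumptions, not anything PowerVR has published):

Code:
#include <stdio.h>

#define TILE_SIZE 32
#define NUM_GPUS  2

int main(void)
{
    /* Assumed DC/Xbox-era frame size, purely for illustration */
    const int width = 640, height = 480;
    int tiles_x = (width  + TILE_SIZE - 1) / TILE_SIZE;
    int tiles_y = (height + TILE_SIZE - 1) / TILE_SIZE;
    int count[NUM_GPUS] = {0};

    for (int ty = 0; ty < tiles_y; ty++) {
        for (int tx = 0; tx < tiles_x; tx++) {
            /* Fixed round-robin interleave: the developer never has to
               know (or care) which GPU owns which tile. */
            int gpu = (ty * tiles_x + tx) % NUM_GPUS;
            count[gpu]++;
        }
    }

    for (int g = 0; g < NUM_GPUS; g++)
        printf("GPU %d renders %d of %d tiles\n", g, count[g], tiles_x * tiles_y);
    return 0;
}

That works out to 300 tiles for a 640x480 frame, 150 per GPU - an even split by count, though tiles vary in rendering cost, which is part of why you never quite hit 2x.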

Now this is going to be more expensive than one GPU. But instead of having one GPU at, say, 1 GHz, you can have two GPUs at 800 MHz - raising yields and reducing the heat coming off each GPU, while still giving you 1.6 GHz of aggregate clock. Yet you will get much more performance than the single 1 GHz GPU. Or hell, you can put in two 1 GHz GPUs.

With the Saturn this was a last-minute thing, and they didn't have hardware that was meant to work together.
 
jvd said:
Yet you will get much more performance than the single 1 GHz GPU. Or hell, you can put in two 1 GHz GPUs.

Are people just avoiding the obvious question of how to overcome the economics, which are horrendous in a multichip solution?

No offense, but the idea is dumb not only from an economic standpoint, but from a technological one as well. If a PC IHV, who already practices IP reuse and established architectures, can't match Sony and IBM, then there are more serious problems than worrying about multichip. Talk about categorical breakdown and failure of the PC vendor and Microsoft.

You'd think people would heed Baumann's comment to wait before speaking. But again I digress...
 
Vince said:
No offense, but the idea is dumb not only from an economic standpoint, but from a technological one as well. If a PC IHV, who already practices IP reuse and established architectures, can't match Sony and IBM, then there are more serious problems than worrying about multichip. Talk about categorical breakdown and failure of the PC vendor and Microsoft.

Let me just say: I have no doubt MS can catch Sony and IBM. I don't know if they can catch Sony, IBM and Nvidia.
 
Vince, you're right in your reasoning here.

Multi-chip is used where you can't economically have a single chip do the job, but reaching that level is HARD - and that's a necessary condition, not a sufficient one. The market also has to absorb the PCB cost and whatever extra logic cost there is. These aren't cheap, especially if you want fast chip-to-chip communication, or if you decouple the chips to a very high degree (ATI's AFR), where you have insane overhead.

The entire PC market works on the merchant pricing model. You have huge economies of scale which absolutely pound manufacturing costs down to acceptable size and allow for absurdly large R&D budgets; this has allowed x86 and a lot of other stupid things to perpetuate.

Additionally, realise that multichip was basically always SUPER high-end in the consumer space, and anything beyond two chips is usually reserved for the professional series. This is because the performance required wasn't a die shrink away; it was a generation or more away - for a single chip, that is.

The solution is going to be a single chip. What I do see happening is either Nvidia OR ATI OR 3DLabs putting out a graphics monster - let's face it, the NV3x and R3xx are big leaps over their predecessors, and the P10 has incredible possibilities.

Right now, I'm sort of leaning towards a Creative solution - they can provide both sound and graphics. An improvement over their current P10 could be slipstream, more execution resources, FP, a faster clock and a wider bus to feed it, and you have one wicked VPU. With the NV line I can see more execution resources, higher clock rates, tear out the legacy, and you've got quite the beast. According to Orton, the R3xx isn't as big a deal as the next one, which leads me to wonder what that is, because the R3xx is definitely fantastic.
 
An improvement over their current P10 could be slipstream, more execution resources, FP, a faster clock and a wider bus to feed it, and you have one wicked VPU.


You think a 512-bit wide bus would be likely for Xbox 2? I'd hope so; 256-bit isn't going to cut it, assuming something like PowerVR or HSR isn't being used. The PS3 will have a 1024-bit bus (or 256 x 4) for the Cell CPU, and at least that much for the GPU, if not more. PC graphics had 256-bit in 2002 (Parhelia, R300, P10) except Nvidia (which waited until NV35), so 512-bit by 2005-2006 should be possible. Maybe. Although 128-bit lasted a long, long time - I think 128-bit was introduced with the Riva 128 in '97 (correct me if that is mistaken), so that's about five years from 128-bit to 256-bit.
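
For reference, peak bandwidth is just bus width times effective transfer rate. A quick sketch with illustrative clocks (my guesses for the eras involved, not confirmed specs):

Code:
#include <stdio.h>

/* Peak bandwidth = (bus bits / 8) bytes per transfer * transfers per second */
static double peak_gbps(int bus_bits, double effective_mtps)
{
    return (bus_bits / 8.0) * effective_mtps / 1000.0;
}

int main(void)
{
    printf("128-bit @  250 MT/s: %5.1f GB/s\n", peak_gbps(128, 250));   /* late-90s class */
    printf("256-bit @  620 MT/s: %5.1f GB/s\n", peak_gbps(256, 620));   /* R300 class */
    printf("512-bit @ 1400 MT/s: %5.1f GB/s\n", peak_gbps(512, 1400));  /* hypothetical 2005-06 part */
    return 0;
}

The R300-class line works out to about 19.8 GB/s, which matches the shipping boards; the ~90 GB/s a hypothetical 512-bit part would deliver shows why the bus-width question matters.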
 
You know, the more I think about it, the more I think PowerVR is a strong possibility - even more than 3DLabs. If MS was willing to use GigaPixel, a real unproven wildcard, for Xbox1 (before they switched to Nvidia), I think PowerVR could be second after ATI if Nvidia is out of it, or the third choice if ATI and Nvidia are the top two. PowerVR has more credibility than GigaPixel, since PowerVR has had several PC cards, the Dreamcast, Naomi I & II, plus set-top boxes and the mobile MBX. GigaPixel had nothing except a somewhat impressive demo in 1999-2000.

Imagination will have to prove itself this year with Series 5, though.

A Series 6 or Series 7 for XBox2 could probably get away with using a 256-bit bus and GDDR3/DDR-II rather than something more expensive and exotic. Remember, by 2005-2006, 256-bit and DDR-II will be old.

And although this is hotly debated on Beyond3D, I find Dreamcast, which is driven by the now-ancient PowerVR2, to be pretty darn impressive today. I don't find GC or Xbox games to be leagues beyond the best DC games at all.

Well, so much for my case for PowerVR; it's up to Imagination.

Don't get me wrong, I still think ATI and Nvidia are the frontrunners for XBox2 at the moment.
 
Tahir said:
Exactly what Sega did when they realised the power behind the PSX (original PS).

I don't think history will repeat itself this time though. That would be far too embarrassing and even more damaging than it was to Sega.

Well, a delay is very likely... I mean, PS3 development could very well begin by next summer, and MS hasn't even begun to actually materialize their hardware...

IF PS3 were to launch in 2005, MS would need to have hardware by mid-2004 in order to compete... but they're waiting for the PS3 announcement to start true development, which won't be until early 2004 :oops: ... Now Sony seems to have learned - look at the PSP...

What if they don't give the whole specs :?: They could just give the specs that are likely to remain unsurpassed - CPU specs, bandwidth specs, etc...

Well, the worst-case scenario would be if they decided to keep Sony from having a head start (IOW, a nigh-simultaneous launch) and tried to significantly out-power it at the same time. It wouldn't be pretty: 3-4 months of game development time on actual hardware, a super-buggy hardware mess... and who knows what else...

let's face it, the NV3x and R3xx are big leaps over their predecessors, and the P10 has incredible possibilities

How big? I know they certainly have more and improved features... but what about performance? Do they double or triple it? I ask because I keep hearing 15% better, 30-50% better... and even some of that - even previous performance measurements - seems to have been tarnished due to che@ts...
 
Dual chip or not, it's coming in 2006 (at least in Japan):

http://www.gamesindustry.biz/content_page.php?section_name=pub&aid=1814

Ballmer sets 2006 date for Xbox successor

Rob Fahey 13:30 19/06/2003
Will Microsoft be last off the starting line once again?


Speaking in an interview with Japanese journalists, Microsoft CEO Steve Ballmer has pinned down 2006 as the year when a successor to the Xbox will be released - a time frame which could see Xbox 2 being the last of the next generation consoles to appear.

Ballmer acknowledged that sales of the Xbox in Japan have been sluggish (it'd take quite a poker face to claim otherwise, after all), but stated that Microsoft would stick to its long term goals in the region.

It's not clear from Ballmer's comments whether 2006 is the Japanese date for the launch, with Xbox 2 (or Xbox Next, as some sources claim Microsoft is calling the system) arriving earlier in the USA and possibly Europe. Bear in mind that the Xbox launched in 2001 in the USA, and in early 2002 in Europe and Japan.

However, if the 2006 date holds for all territories, this could put Microsoft on the back foot once again in terms of release dates. Nintendo's N5 is widely expected to arrive in late 2005, and Sony's PlayStation 3 should also hit a 2005 date, in Japan at least - although there are concerns over the volume in which the CELL microprocessor can be produced before Sony's Nagasaki Prefecture fabrication plant for the unit comes online. If the volume is low, release dates overseas could well be pushed back.
 
You think a 512-bit wide bus would be likely for Xbox 2? I'd hope so; 256-bit isn't going to cut it, assuming something like PowerVR or HSR isn't being used. The PS3 will have a 1024-bit bus (or 256 x 4) for the Cell CPU, and at least that much for the GPU, if not more. PC graphics had 256-bit in 2002 (Parhelia, R300, P10) except Nvidia (which waited until NV35), so 512-bit by 2005-2006 should be possible. Maybe. Although 128-bit lasted a long, long time - I think 128-bit was introduced with the Riva 128 in '97 (correct me if that is mistaken), so that's about five years from 128-bit to 256-bit.

Is there really a need? With GDDR, you will probably have a setup that will likely provide around 50 GB/s of memory bandwidth, and that's basically a simple doubling of what is currently offered - a 256-bit bus with a bit more than double the clock rate. DDR-II runs about twice as fast on the same process, and with the advancements of GDDR that should increase operational frequency a fair bit.
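
(Sanity-checking that figure with my own numbers: a 256-bit bus moves 32 bytes per transfer, so at an effective ~1.6 GT/s you get 32 x 1.6 ≈ 51 GB/s - right around the 50 GB/s estimate.)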

During the time frame the Xbox 2 will be manufactured, I suspect closer to the 100 GB/s range. Not to mention, the majority of what games will do during the Xbox 2 era will likely be very shader-intensive, and thus I suspect we're looking more at computational bottlenecks. Additionally, ATI can embark into the world of on-chip RAM of some sort; they have the experience.
 
Well, ATI is certainly in a position to do on-chip/embedded memory, since they bought ArtX, who made Flipper, which has embedded memory - although NEC helped with that.
 
OK, so I lied about my one and only post, but I just couldn't pass up posting a reply to this one. ;)

Riddlewire said:
You guys are way off.
The entire system is going to be a single chip SoC from Transmeta!

Although I believe you're being sarcastic, this is one company I thought of that could possibly do it. The funny thing is that S3 has long been associated with Transmeta and graphics for Transmeta devices. Transmeta has also released the TM8000 (aka Astro) processor, which includes a built-in AGP 4x interface.

They're still a long-shot, and that's one of the reasons I didn't initially include them in my list. Same goes for some of the others that people have mentioned(CagEnt, Rendition, BitBoys, etc.).

Tommy McClain
 