X-Box 2 Speculation!

Must-have, because of how the Xbox got trounced by the PS2 in fluid/clothing/limb animations... *looks at SC2* *looks at DOA3* oh wait.....

There's a huge difference between 6 GFLOPS and 1-13 TFLOPS of performance... the latter provides sufficient resources to do incredible physics, etc...
 
Evildeus said:
Brimstone said:
As far as the GPU goes, I have doubts that either ATI or Nvidia will get the contract. Cost seems to be a huge concern for Microsoft, and a company like PowerVR or 3Dlabs will produce a GPU design at a lower cost. If Microsoft wants performance though (cutting-edge features and raw power), I don't see them having much choice but to choose Nvidia.
Well, I don't see how you come to that conclusion. Still, they are not really on the market ATM, and if they really were cheaper they would be the leading providers on the graphics market, which is not the case. :?


Nvidia has big fish to fry in the PC market, so if MS wants NV to divert attention away from that market (and some others) they will have to pay out a handsome amount of money. Some analysts have concluded that making the NV2A for the X-Box was a mistake, and the same thing will happen again if they commit to an XB-2. Of course the equation would change if MS were offering a lot of money for the contract, but that is something MS isn't going to do unless something radical happens to cause them to shift their mindset. NV wants lots of money for its design and MS doesn't want to spend much. It's doubtful either side will budge.

The issues with ATI would be trust and them stretching themselves too thin. ATI already has a relationship with Nintendo (the trust aspect) and has some sort of contract with them (the stretched-thin aspect).

With either 3Dlabs (Creative Labs) or PowerVR you have companies that could see a serious boost in revenues/profits from an XB-2 contract. I have no idea where PowerVR is with their design at the moment, but 3Dlabs has a programmable chip architecture, almost DirectX 9 complete, available today.
 
The point is, I'm not saying that ATI or Nvidia aren't expensive; I'm trying to figure out how you can say that 3Dlabs or PVR are cheaper.

And I still disagree with you. Sure, Nvidia or ATI would ask for some profit, but that's also the case for 3Dlabs or PVR. Actually, I would say that it is more cost-effective to go with ATI or Nvidia than with any other firm.

They are already the top performers and most likely will stay there for the next 2-3 years. They have the engineering teams to handle such a program, they just need to customize a chip already on their schedule, they have good relations with the foundries, and they get the best prices.

On the contrary, PVR and 3Dlabs don't have an up-to-date top performer, are ATM at least one generation behind, don't have the teams to handle such a contract, would need to build a chip from scratch, would need to negotiate with foundries, etc.

Actually, it doesn't make any sense to go with 3Dlabs or PVR, just as it doesn't to go with BitBoys ;)
 
Evildeus

What you're missing, which I have already written twice in this thread, is that with PowerVR MS would get full control of any GPU created, unlike with Nvidia or ATI, where they just have to buy finished chips at a fixed price. That can make it a lot cheaper for MS.

Also, you don't know whether or not PowerVR has the team to do this. They did when they created the GPU for the Dreamcast... what has changed?

About them not having a top performer in the PC space: well, going back to the DC again, they didn't have a top performer in the PC space when they produced the DC's graphics chip either. So that's not important. All that's important is what tech they have now, and it's up to MS to evaluate that.

All anyone here is saying is that PowerVR, or a company like them, offers certain advantages that MS may see as very attractive.

BTW, BitBoys are hardly comparable to PowerVR or 3Dlabs. Try to remember that, in PowerVR and 3Dlabs, you're talking about two companies that have produced and released many 3D chips, successful ones at that, versus a company that has never released a 3D chip (BitBoys). There is no comparison.
 
Well Teasy, if I remember correctly, PVR was one of the top sellers of 3D chips when Sega released the Dreamcast. They were actually producing chips and selling a lot. That's no longer the case.

What you're missing, which I have already written twice in this thread, is that with PowerVR MS would get full control of any GPU created, unlike with Nvidia or ATI, where they just have to buy finished chips at a fixed price. That can make it a lot cheaper for MS.
That's not true. When MS (or any other company) needs something particular, MS puts it in their specifications. It's up to the chip maker to fulfill those specifications. MS knows what they want; that's why we don't have an NV20/25 in the X-Box but a modified chip. The extra development is far less than making a chip from nothing.

Also, you don't know whether or not PowerVR has the team to do this. They did when they created the GPU for the Dreamcast... what has changed?
They don't produce any more? ;) Or more precisely, much less.

BTW, BitBoys are hardly comparable to PowerVR or 3Dlabs. Try to remember that, in PowerVR and 3Dlabs, you're talking about two companies that have produced and released many 3D chips, successful ones at that, versus a company that has never released a 3D chip (BitBoys). There is no comparison.
So what? Does that mean that with MS' money they can't do something? Yes, they haven't done anything until now! Of course, 3Dlabs and PVR are producing some stuff, but they are far from being close to Nvidia/ATI.
 
chap,

The Xbox came out 1 year after the PlayStation 2...

The GS had very limited math capabilities ( no T&L and no Pixel Shading )... in theory you could have Pixel Shading and T&L on the Visualizer alone leaving ~1 TFLOPS for physics...

It would not match Xbox 2's polycount, but even that configuration would be possible... the programmer can decide whether to do T&L on the Broadband Engine or off-load it to the Visualizer... software Cells can migrate ;)


And yes, we have seen games like Splashdown, GT3, etc... even Jason Rubin admitted the Xbox is 1.5-2.5x more powerful in the end, yet in physics and animation the PlayStation 2 has done MORE than just stay competitive, and you know this too...

The EE's Achilles' heel appears to be the integer side of things: while the MMI SIMD instructions are nice and powerful, the two ALUs, thanks to the low clock speed, a lack of L2, small L1 ( data ) and insufficient ( IMHO ) Scratchpad RAM, are not very fast at scalar integer operations, and chips like the Pentium III in the Xbox have the advantage here ( 733 MHz, Out Of Order Execution [R.O.B.], 128 KB of L2 Cache [instructions+data], 16 KB of L1 Data Cache, 16 KB of L1 Instruction Cache, etc... )...

And while you would never see max efficiency, SSE brings the XCPU close to 3.2 GFLOPS... and the T&L is done on the GPU...

~1+ TFLOPS... I do not see an Intel or AMD processor even coming close to 400 GFLOPS in early 2005 ( it has to enter mass-production before launch )...

a 10 GHz Pentium 4 can do 4 FP ops/cycle... and that would put it at 40 GFLOPS...

Let's assume that Intel also adds FP MADD instructions and we can rate "SSE4" at a peak of 8 FP ops/cycle... that is 80 GFLOPS...

Now, let's double the number of "SSE3" units... we reach 160 GFLOPS...

We are talking about a 10 GHz Pentium IV/V ( Netburst architecture... or even the next IA-32 architecture ) processor with two full-fledged FP Vector Units with 4 FMACs each.
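( For what it's worth, all of these peak figures come from the same clock x FP-ops-per-cycle x number-of-units arithmetic; here is a minimal Python sketch of that back-of-envelope math. The per-cycle counts and the hypothetical "SSE4"/dual-unit configurations are the assumptions made in this post, not announced Intel specs. )

```python
# Back-of-envelope check of the peak-FLOPS figures in this post.
# peak GFLOPS = clock (GHz) x FP ops per cycle x number of vector units
def peak_gflops(clock_ghz, fp_ops_per_cycle, units=1):
    """Theoretical peak only; ignores memory bandwidth and real-world efficiency."""
    return clock_ghz * fp_ops_per_cycle * units

print(peak_gflops(0.733, 4))          # XCPU with SSE: ~2.9 GFLOPS (quoted above as ~3.2)
print(peak_gflops(10.0, 4))           # 10 GHz Pentium 4, 4 FP ops/cycle: 40 GFLOPS
print(peak_gflops(10.0, 8))           # hypothetical FP MADD "SSE4": 80 GFLOPS
print(peak_gflops(10.0, 8, units=2))  # doubled vector units: 160 GFLOPS
```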

In order to achieve much more they would have had to invest in e-DRAM ( to give the processor enough bandwidth to process this much incoming data and keep the execution units fed... ), more FP units, etc... that would be tough to have coexisting with the old x86 baggage and would have required several years of R&D... it would be practically a brand-new architecture...

Prescott should be released by Intel later this year, and 2004 will be the year it is pushed to the moon, with 2005 the year it reaches cheap desktop PCs ( I said cheap ;) )... The next IA-32 architecture should be ready by 2005, but that is a single generational leap over the Pentium 4 architecture, and it was built as a server/desktop/mid-range workstation processor, not as an ultra-high-performance embedded processor.

Integer-wise, the Intel offering might still beat the Cell processor in some scenarios, but it won't be a landslide victory, and I do not see the FP throughput of that Intel offering matching the Broadband Engine's FP processing speed the way the XCPU's did the EE's ( efficiency aside... ).

Xbox 2 is coming out in the same year as PlayStation 3 and GCN 2, not 1 year later... it is already almost Summer 2003, and in less than 2 full years you do not have time to design a 0.5-1 TFLOPS-class general-purpose processor... MS has a lot of cash, but it is not infinite...

A giant like IBM could not have developed and produced something like Cell in two years ( it took Intel 6-7 years to prepare the IPF architecture, compilers, and tools, and for them it was such a radical move that the adjustment proved to be tough... they had experience in CISC processors, and IPF was initially an HP concept... ), so what makes you think Intel would do that for MS... ? Even if they had started in 2002 ( Cell as an IBM project started quite a while before the joint venture with Sony ), the amount of money needed for such an R&D undertaking could not be easily kept hidden... also considering that both Intel and MS are publicly traded companies...
 
That's not true. When MS (or any other company) needs something particular, MS puts it in their specifications. It's up to the chip maker to fulfill those specifications. MS knows what they want; that's why we don't have an NV20/25 in the X-Box but a modified chip. The extra development is far less than making a chip from nothing.

That is not the point... the chip would still be bought ( whether a standard PC GPU or a custom-modified one ), as MS has no control over chip manufacturing... MS would buy chips in large batches, and each batch would come at a pre-negotiated price which would change once the number of processors shipped to MS ( and put in Xbox 2's ) reached a certain quota ( pre-negotiated too )...

Sony has much more control over lowering the cost of each PlayStation 2 console: as soon as new manufacturing technology is available they can implement it and reduce manufacturing costs...
 
Zidane, the extent of your imagination never ceases to amaze me :) (Although setting such a broad range of potential performance is a bit cowardly as far as predictions go.)
 
Evildeus said:
On the contrary, PVR and 3Dlabs don't have an up-to-date top performer, are ATM at least one generation behind, don't have the teams to handle such a contract, would need to build a chip from scratch, would need to negotiate with foundries, etc.

Actually, it doesn't make any sense to go with 3Dlabs or PVR, just as it doesn't to go with BitBoys ;)

Actually, both PVR and 3Dlabs are successful and are ahead of NVIDIA & ATI in some fields. PVR's low-power cores are doing well (the deal with ARM is a big plus), and 3Dlabs outperforms NVIDIA in CAD/CAM apps (find an NVIDIA card with 512 MB of onboard RAM and virtual texturing).

Both have had several years to build a console or PC core. Will they outperform NVIDIA & ATI? Who knows? But don't discount them out of hand.

It's sometimes worth looking outside consumer benchmarks... you may be surprised.
 
Well DeanoC, we can find some graphics companies with great cards, but at what price? I look at the professional market also, at least at <$2000 cards, and I think Nvidia is far ahead in that market and ATI is quite competitive!

Finally, I'm not saying that 3Dlabs or PVR can't do it; I'm saying that it would be more expensive.

http://www.amazoninternational.com/html/benchmarks/graphicCards/quadroFX/quadroFX_2000_page1.asp

Panajev2001a

That's the difference between doing it all in-house and outsourcing. In outsourcing, the contract is what matters: the terms and agreements :)
 
Has anyone investigated the possibility of using multi-core CPUs? Both AMD and Intel are planning to use this AFAIK. What are the possibilities for a console to have such a thing?

Speaking of which, how possible is this for a console CPU?:

Intel Looks to 1 Billion-Transistor Chip
By Ken Popovich

SAN JOSE, Calif.--Offering a possible peek at how Itanium processors may look in 2007, a senior Intel Corp. engineer on Tuesday unveiled the blueprint for a 1 billion-transistor processor.
Although he discounted the idea that he was making a product announcement, Intel Fellow John Crawford nevertheless appeared to strongly suggest that his blueprint of a four-core processor would likely be developed to power future computers.

"This is eminently doable, and you can expect things of this nature coming out," said Crawford, who helped design the Pentium and Itanium processors and holds the title of Intel fellow, the chip maker's highest-ranking technical position.

"Advances in technology are propelling us toward the era of the 1 billion-chip microprocessor," and the industry needs to figure out how to design and harness the power of such super chips, he said in a keynote address to several hundred industry engineers and developers at the annual Microprocessor Forum here.

Based on Moore's Law--an amazingly accurate prediction Intel co-founder Gordon Moore made 30 years ago that transistor density on chips would double every 18 to 24 months--the first 1 billion-transistor processor will arrive on the market in 2007, putting it in the not-too-distant future for high-tech designers.

While the challenges of designing a 1 billion-transistor processor are daunting, Crawford's presentation indicated that Intel, of Santa Clara, Calif., is already well on its way toward resolving many of the major issues.

Showing "how one might use a billion transistors in a server product," Crawford displayed a blueprint for a processor featuring four cores surrounding a shared 12MB to 16MB memory cache.

Multiple cores help address one of the major obstacles to chip designs--heat. While engineers have found ways to reduce power consumption and the heat produced by individual transistors, efforts to boost performance have spurred chip makers to pack more and more transistors into their chips, generating large amounts of heat. By dividing the processor's core four ways, the heat is spread out over a wider area, allowing cooling techniques to work more effectively.

IBM's Power4 processor featured in mid- to high-end servers features dual cores. While Intel has yet to announce plans to build multi-core chips, many industry leaders are, including Hewlett-Packard Co. and Sun Microsystems Inc., which will feature dual-core designs in future PA-RISC and UltraSparc processors for servers.

One of Intel's latest technologies, called hyperthreading, also is ideally suited for use with multi-core processors, Crawford said. Hyperthreading, which Intel first debuted in Xeon server processors this year, enables a single CPU to act like two virtual CPUs by splitting what normally would be one data stream into two. The technique makes fuller use of the processor, boosting overall performance up to 30 percent.

In a four-core processor, hyperthreading would speed data exchanges between the cores. Based on current designs, Crawford said, hyperthreading would boost the processor's performance about 25 percent.

One area the Intel executive didn't address was at what frequency such a super chip would operate. While the speed of Intel's PC processors have generally doubled every two years, with the company currently poised to release a 3GHz Pentium 4 soon, high-end server processors run at slower clock rates and place more emphasis on other design features to boost performance. Intel's fastest 64-bit processor, the Itanium 2, currently tops out at 1GHz.

Whatever its eventual frequency, the first 1 billion-transistor chip will offer performance far greater than anything available today, with Crawford adding that developers will have the opportunity to tap that power in new applications.

While there are still many obstacles that need to be overcome before such a processor becomes a reality, he said, such challenges keep his job interesting.

"We're living in exciting times in the semiconductor industry," he said.

http://www.eweek.com/article2/0,3959,636066,00.asp
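
As a quick sanity check on the article's Moore's Law math (transistor counts doubling every 18-24 months), here is a small Python sketch; the 2002 baseline of ~220M transistors is an assumption on my part (roughly an Itanium 2-class server die), not a figure from the article:

```python
# Compound-doubling check of the article's Moore's Law projection.
# Baseline: an assumed ~220M-transistor server chip in 2002 (Itanium 2 class);
# this number is an illustrative assumption, not a figure from the article.
def projected_transistors(baseline, start_year, target_year, months_per_doubling):
    doublings = (target_year - start_year) * 12 / months_per_doubling
    return baseline * 2 ** doublings

baseline_2002 = 220e6
for months in (18, 24):
    count = projected_transistors(baseline_2002, 2002, 2007, months)
    print(f"{months}-month doubling: ~{count / 1e9:.1f} billion transistors in 2007")
# Roughly 2.2 billion (18 months) and 1.2 billion (24 months), so a
# 1-billion-transistor chip by 2007 is in the right ballpark.
```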
 
DeathKnight said:
So let me get this straight. The Cell's some miracle processor that essentially has GPU features and full-blown CPU functionality all rolled into one package? Sounds eerily reminiscent of the EE: a multipurpose processor. Dedicated hardware will always prevail (more being done clock for clock) and I doubt the Cell's going to change that.

It's just humorous how you strut around here like the Cell's the second coming :LOL:

And I think your position is quite humorous. It's not a 'miracle' architecture, it's the logical extension of today's trends. You're basically looking at a 2005 processor today and making a fool of yourself. This is what happens when you're designing ICs with gate counts approaching 100M.

Your nomenclature is antiquated and ignorant of contemporary trends in design. You refer to this mystical "Dedicated Hardware" as if it were some fixed and static form of architecture, which is just wrong.

DX7-style T&L necessitated a pretty "dedicated" block of logic to compute. So, according to your logic that dedicated "units" always surpass more programmable ones (which is theoretical in nature and nothing more), the GeForce 256's "dedicated" TCL logic should outperform an R300, where (IIRC) much is done in programmable Vertex Shaders and not in dedicated logic.

Do you really believe this?

It's merely logical: we're getting more programmable, and the concept of "Cell" is just a logical extension of today's trends as seen in the NV3x and R3xx.
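
To make concrete what that DX7-era "dedicated" T&L block actually hard-wires, here is a rough numpy sketch of the per-vertex transform-and-light math (the matrix, light, and vertex values are made-up placeholders); on an R300-class part essentially the same math runs as a vertex shader program instead of fixed logic:

```python
import numpy as np

# Rough sketch of the per-vertex work a DX7 fixed-function T&L unit hard-wires:
# transform the position by a combined model-view-projection matrix, then apply
# one directional diffuse light. All matrix/light/vertex values are placeholders.
mvp = np.eye(4)                        # placeholder model-view-projection matrix
light_dir = np.array([0.0, 0.0, 1.0])  # placeholder directional light (unit length)
light_color = np.array([1.0, 1.0, 1.0])

def transform_and_light(position, normal, base_color):
    """One vertex through the fixed T&L path: transform + diffuse lighting."""
    pos_h = np.append(position, 1.0)    # homogeneous coordinate
    clip_pos = mvp @ pos_h              # vertex transform
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)
    lit_color = base_color * light_color * n_dot_l
    return clip_pos, lit_color

clip, color = transform_and_light(np.array([1.0, 2.0, 3.0]),
                                  np.array([0.0, 0.0, 1.0]),
                                  np.array([0.8, 0.2, 0.2]))
print(clip, color)
```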

Sounds eerily reminiscent of the EE: a multipurpose processor.

Let's look at this. How much faster is the NV20's T&L front-end than the VUs'? I mean, the entire EE weighs in at ~10M transistors; even at the extreme, each VU has 5M transistors. The NV20 has 65M+ in total. I don't know the fraction devoted to T&L, but...

You would burn an inordinate amount of silicon dealing with textures if you tried to do everything on a fully unified architecture (I seriously doubt DX10/unified-shading will have such an architecture).

From what I've heard, NV40 and R400 and beyond will combine their Vertex and Fragment Shading "units" (for lack of a better word) into a more unified shading architecture, which would be logical.
 
nonamer said:
Has anyone investigated the possibility of using multi-core CPUs? Both AMD and Intel are planning to use this AFAIK. What are the possibilities for a console to have such a thing?

Speaking of which, how possible is this for a console CPU?:

The XB-2 will probably end up with some type of multi-core x86 CPU, I think. I thought I read a while back that Intel plans to release a new IA-32 architecture in 2004 to replace the current Pentium 4. If Intel went with a quad core and the costs were at a reasonable level around the time of an XB-2 console launch, Microsoft could end up with a very powerful CPU.


Evildeus

My thinking leads me to conclude that a contract with Microsoft would make a significant impact on 3Dlabs or PowerVR. These companies don't have mainstream market penetration like nV and ATI. I really doubt either 3Dlabs or PowerVR is going to take a risk and try to go head to head with nV and ATI anytime soon. A console contract, though, is something interesting to get involved in, since they both already have the R&D in place. They will probably be more aggressive with their bids price-wise.
 
fbg1 said:
glappkaeft said:
Funny how people don't bother to read their own links. Hint: they are software engineers.

I knew someone would make that comment. I read my link, and decided to post it anyway. I think the top OS kernel developers in the world from the top CPU design company in the world (yes, DEC) understand enough about microprocessors to be capable of designing one. MS has all the human resources they need to make their own silicon. However, as I stated earlier, the problem they would face is designing a chip that is competitive with the Cell. That's a whole other can of worms.

I have to strongly disagree. There is basically no skill transfer between software and CPU design. Building a state-of-the-art CPU is a very specialized skill and very few companies can pull it off. For instance, VIA, Motorola, Sun, and DEC/Compaq have not been able to compete, and they are respected hardware companies. The chances that a team consisting of software engineers and managers could pull this off are microscopic IMO.

Patrik
 
glappkaeft said:
I have to strongly disagree. There is basically no skill transfer between software and CPU design.

Are you referring to the logic part, the manufacturing part, or both?
 
Deepak said:
Whatever MS does.....it will have to have...

Backward compatibility, and

Assuming some Xbox games are coded at least partially in ASM, can Xbox 2 really have backwards compatibility without including the P3/XCPU on the mobo?
 
fbg1 said:
Deepak said:
Whatever MS does.....it will have to have...

Backward compatibility, and

Assuming some Xbox games are coded at least partially in ASM, can Xbox 2 really have backwards compatibility without including the P3/XCPU on the mobo?

Well, I don't know much about this, but wouldn't Xbox 2 have enough power to emulate Xbox 1?
 
Vince, you're not understanding the use of the word dedicated. You're also taking the word dedicated to mean static hardware.

By dedicated I mean it was designed for an EXACT purpose, i.e. today's GPUs or a sound processor, etc. This is opposed to a general-purpose processor which can do one thing, then it can do this thing, then it can do that thing, then it can mimic this thing, ad nauseam.... Dedicated processors can be designed and trimmed to do exactly what they're supposed to do in the quickest and most efficient way possible, which includes any programmability they may have. General-purpose processors aren't designed to do everything under the sun in the quickest and most efficient way possible (or anywhere close to the dedicated processors we have today)... it'd be damn near impossible to accomplish that anytime soon.

I also don't see how it's an extension of today's trends. It's actually kind of ludicrous. There are always going to be dedicated chips to do a specific job (like handling the entire graphics pipeline) simply because they can be designed to get more done, and done faster, than any general-purpose processor of the same time period.

Although, the Cell could possibly be close to some "miracle" processor if they're crazy enough to put extremely fast, dedicated, programmable processors in there for nearly every job you could think of doing. I don't see that happening, though, and definitely not with the PS3.

I think you're either putting too much faith in the Cell or getting a bit too giddy about it. Wait until the final product (PS3) is delivered, and if it's the fastest and most versatile of all the next-gen consoles then you can jump for joy ;) I'm not putting my money behind any next-gen console being the most powerful yet, and no other sane person should either.
 