From PC to Next-Gen Consoles: Largest Performance Gap...

PS2 relies on manual loop unrolling to hide latency
Oh that is so $%^&!@# disgusting.

Each time a PS3 discussion springs up, my personal *guess/picture* of the machine changes. It must have changed thrice in the past 3 months.

We just have no choice but to wait another three or four months for Sony to announce the 'visualization workstations' before we get the first clear picture.

Or we can go kidnap some launch-title developer.
 
MfA said:
Zalman manages to cool high end P4s with heatpipes alone though.

Yeah, but...

zalman2.jpg

:D

jvd said:
I'm sure this chip will be used in many other things. Perhaps not a dual-core or three-core dual-CPU setup, but single cores will most likely be made from it.

We're hearing 2 rumors about the final XeCPU: one talks about customized PPC97x cores and the other about customized Power5's... So, of course, they will be used in other situations, since they already are. :D
The main customizations made to the XeCPU are about its "graphical" capabilities (Vertex-friendly :) ), which are quite useless for anything else but an embedded graphical device such as a console, IMO.
 
Vysez said:
The main customizations made to the XeCPU are about its "graphical" capabilities (Vertex-friendly :) ), which are quite useless for anything else but an embedded graphical device such as a console, IMO.

What about 'security'-related changes to prevent 'unsuitable' code execution? ;)
 
passerby said:
PS2 rely on manual loop unrolling to hide latency
Oh that is so $%^&!@# disgusting.
Oops, on further thought I was probably shooting my mouth off with that one. :oops: It's too much to ask of compilers to automatically unroll every unrollable loop. icc lets you set an unroll pragma in the code, but I've never tested it; fortunately I've never hit a problem that required manual unrolling. :p
 
passerby said:
PS2 relies on manual loop unrolling to hide latency
Oh that is so $%^&!@# disgusting.
Oops, on further thought I was probably shooting my mouth off with that one. :oops: It's too much to ask of compilers to automatically unroll every unrollable loop. icc lets you set an unroll pragma in the code, but I've never tested it; fortunately I've never hit a problem that required manual unrolling. :p

The issue is more that you need to be executing N verts in parallel to get close to the theoretical performance, which involves a lot of register juggling, plus special entry and exit code to get into a good state when you terminate or hit a problem you can't solve in the body of the loop.

VCL does an OK job of this (minus the bugs where it just generates bad code), which makes life a lot easier. Unfortunately it also tends to generate very large code, which can force you to swap uCode, and that can be expensive.

The VU's are something I happen to like. Even so, I question the value of VU0 on the die; sure, it can be used, but for the majority of games it just isn't worth doing any more than using it in macro mode.

The weakest part of the PS2 is its main CPU; it has pretty much no redeeming features (at least the GS has good fillrate). Getting to the point where something else is the bottleneck takes real work or very complex vertex calculations.

To me at least this generation has demonstrated just how far apart platforms can be in featureset and performance and still basically have comparable graphics.

Cross-platform development is an unavoidable reality, and in a lot of ways it's the great equalizer. Content is easily the most expensive part of development, and for the most part developers just aren't going to generate stuff that they cannot share between versions.

Next time it could well be even closer; assuming Sony doesn't forget to support mipmapping and basic blend modes, it's going to be very close.
 
Which will win? Ask in five years time.


Actually, we might be able to ask in one to two years' time, at least in terms of performance, depending on the scenario that plays out. The process used will give us a clue as to which is the more powerful hardware. If they're both close process-wise, you're right, it could take five years or more. But if there is a great process disparity (90nm vs 45nm, or 110nm vs 45nm) in the GPU or CPU, it would be telling: the one using the more advanced process, provided that process has no serious handicap, would have to be a ridiculously underperforming or bad design to fall below the performance of the other.

This all depends on what MS does (TSMC or IBM; they're cutting costs and corners, so who knows) and on what Sony-Toshiba (+IBM) achieve on their side. If 45nm, which is slated for 2005 for Sony-Toshiba, goes ahead of or even on schedule and is economically feasible, it will be used; likewise TSMC or IBM could achieve something similar, or worse. IBM might also offer differing prices, and a last-minute change to a worse tech is always possible for either side as a cost-cutting measure.

If worst comes to worst (for MS, of course), MS cuts too many corners, and Sony-Toshiba gets lucky, however unlikely that may be: a 110nm GPU with a 65nm (or, God forbid, a change to 90nm) CPU with cost-efficient designs, versus a 45nm/near-32nm GPU and a 45nm/near-32nm CPU pushed harder than normal thanks to their advantages (owning their own fabs, their architecture, capacitor-less eDRAM process tech, and the like; we saw with the PS2 how Sony could push a process to remain somewhat competitive with one two generations ahead, .25 micron vs .15 micron, and back then they were way behind in fab tech compared to now). I think the real winner in that case would most likely be you-know-who...

Something similar could happen with sony-toshiba being on the losing side(say cell sucks, or their fabs are fubar), but that's a scenario for others to develop ;)
 
DeanoC said:
one said:
jvd said:
Today PS2 is seen as less powerful machine than newer Xbox, but if PS2 had had more memory, and if Toshiba had not misestimated future GPU trend in designing GS...?
Let's have a look at this in 'reality'. Let's look at something 'comparable'.

PS2 has 2 general-purpose vector units, clocked at 300MHz, capable of a limited set of 16-bit integer ops and 1 vector dot product per cycle. It has looping and primitive-generation capabilities.
Xbox has 2 vertex units, clocked at 233MHz, capable of 1 vector dot product per cycle.

Clear win to PS2; let's look closer. Clearly higher theoretical speeds for PS2.

Xbox vertex shaders are 6-way SMT: there are 6 threads that switch automatically when latency would stall the pipes, and the hardware also does the homogeneous divide and clipping for 'free' (fixed-function pipeline stages).
PS2 relies on manual loop unrolling to hide latency; in many cases the registers don't have interlocks, so it's down to NOP counting. Its LIW design makes it hard.
Clear win to Xbox this round. Xbox is much easier to program for.

Overall real-world performance: Xbox is faster, but then it should be; it's several-years-later silicon. PS2 manages to hold its head up VERY well considering how much earlier it was designed and implemented. It's a testament to the awesome design skills of ST (Sony and Toshiba) that they managed to put so much pure float power in such an early console.

But it also shows a brute-force, user-unfriendly approach. A good analogy would be modern jets: the early experimental X-planes did get to Mach 1 or 2, but modern jets reach similar speeds a lot more safely.

ATI and NVIDIA have shown how to get good graphics performance relatively easily. Sony has shown that pure brute force and good coders can get good graphics performance.

Deano, PSOne seemed to be a user-friendly, powerful console.

The idea with PlayStation 2 was to let developers get close to the metal and give them the fastest architecture they could design at the time.

To do that they needed a very brute force approach.

I think they now understand that maybe it is a mix of the two approaches that they need, as MS got its cards right with Xbox 2.

Personally, an idea for the EE would have been to take VU0, strip it of the lower integer pipeline (including the branch unit and all) and attach it permanently to the CO-OP pipe, making it a coprocessor for the EE RISC core (yes, the VIF0 would go away too).

Then attach to the RISC core a nice 128 KB L2 cache from which the two ALUs, the FPU and VU0 could be fed (when the data they want is not in the L1 cache or in the SPRAM).

Would this have yielded higher max FP performance for VU0? No.

Would this have made VU0 more usable by developers? IMHO, yes.

Macro mode as it is now slows the CPU down because it kicks in where it hurts the most: the lack of a good cache hierarchy for the RISC core.

With 128 KB of L2 cache the situation would be different.

Now, look at PSP.

;).

They do learn from their mistakes: true, we do not see a big L2 cache... but...

We do not have VU1 to steal main memory accesses, the CPU is only a single-banger (single-issue) processor and quite probably the VFPU is an extension of the regular FPU like we see on the SH-4 CPU.

PSP manages in a portable to offer PlayStation 2 like performance with an easier to program platform (more user friendly).

That is the advantage of newer technology (nice 90 nm manufacturing process) and hindsight, experience.

PSP was designed to be programmed with a higher level API in mind, so was PSOne.

I think PlayStation 3 will try, as much as they can, to go towards that direction.

I do not think they will go crazy with the number of APUs and PEs for that reason.
 
JVD, I think you are a bit biased toward ATI, just like Sony fans are toward Sony. You have your reasons for that, and there's nothing really wrong with it. But I think that what you wrote is contradictory.

Right, and last week I was a VideoLogic and Sega fan. Great. Wonder what I will be next week.

OK, but that is in theory. In reality, Sega, Nintendo and Microsoft all had their asses kicked badly. Even from a technical POV, PS2 still stands up well against much newer hardware like Xbox, even if it is undeniably inferior (how could it be otherwise?). And considering the magnitude of the companies we are talking about, I highly doubt that Sony obtained such results because "they are just lucky". Moreover, when the PS and Saturn launched simultaneously, it was the once-mighty Sega that looked the dwarf of the situation, imho.
You're missing the point. I was replying to the "oh, if the PS2 had more RAM, more this, more that", pointing out that I can say that about any hardware. If the Saturn had been clocked higher and had more RAM, it would have been better than the PSOne.

I pointed out why Sony had such success. Sega launched the DC a year before Sony and was breaking even per system at $200; a year later, Sony was losing at least $150 per system sold at a $300 price point. Do you argue that in 1999, at a $450 price point, the Dreamcast wouldn't have been at least on par with the PS2? Considering how well its games still stand up to PS2 games, I would say it would have been, and with a dual Neon 250 chipset at 125MHz per core it would have had more than double the performance. Add in an Elan and it would have done 10 million sustained polygons with 4 lights, and I would wager it would still have cost less than a PS2.

What?
You make fun of his assumptions when, a paragraph above, you made even bolder ones for Sega? You said they would have kicked Sony in the butt with a super Dreamcast had they waited a year... hence Sony is 'just lucky'. The coherence here sure is striking.
You just don't see that I'm still going along with his claim, pointing out how dumb it is because you can say it about any part.

Btw, Sony was lucky things played out that way, just as Sega was lucky with the way things played out during the 16-bit era, and Nintendo was lucky in the 8-bit era when people thought video games were dead. That's what it's all about. Hell, Sony was even lucky in the 32-bit gen: Sega didn't see the move to 3D until it was too late, and Nintendo was just too late to the market. Change either of those two things and Sony may not have come out on top.

You doubt the PS3 VPU. And you may be right. We know nothing about it. It could be great, it could be crap.
Right now it's most likely Cell-based. Deano seems to be saying that, along with a lot of other people.



Then again, you hype the ATI VPU by saying that "it's most likely that MS got the better VPU".
I am not hyping anything. I am stating. When you go to a company that makes a certain product for a living and constantly puts out that product, you are going to get a top-of-the-line VPU. Not to mention that this company is the leader in PC VPUs; coupled with their IP, it's a good bet they will have the better VPU.

If we don't know anything about Sony's VPU, how can you go that far and then criticise Sony's fanboys?
It depends on who you mean when you say "we".



So, you basically are annoyed by Cell hype simply because you are hyped about ATI.
I still don't see where you are getting hype from.

All I am doing is stating that it will be a better chip and will put the R420 to shame. I am not saying it will be the second coming, that its T&L unit will be able to transform and light 1 billion polygons per second, that it will have a fillrate of 1 billion pixels, or that it can do ray tracing in real time.
That would be hyping.

That is so true and savvy. That's why all this speculation is like a powder keg: it's about to blow up in the face of the losing party.

Yup. Though we try to do our best to discuss things.

My two cents: in Japan alone Sony can sell a million or more PS3s on day one. There's no other consumer electronics device that can move such massive numbers so fast.
Japan isn't the important market. It's the USA that is important, as it's a much bigger market.

SEGA had NAOMI 2, which is essentially a DC 1.5 that they never converted to console form. It's more powerful than the PS2 in many ways.

Right, and without the RAM to store the game in (they would have used DC discs), it would have been very cheap to make. I would say most likely around the cost of two Dreamcasts, maybe just a little more, which would have been the same cost as the PS2.

In the arcades.
And it was cheap to make.

The main customizations made to the XeCPU are about its "graphical" capabilities (Vertex-friendly ), which are quite useless for anything else but an embedded graphical device such as a console, IMO.

In your opinion. They will find uses for them.
 
overclocked said:
65nm is what the PS3 will be made on, correct!?
2 years till the PS3 then; it's not so much.

They're aiming for 45nm CMOS; the first models might, of course, use an older tech, 65nm (or worse, 90nm), plus a heavy cooler.
Still, Cell is/was being created with 45nm in mind.

jvd said:
in your opinion. They will find uses for them.

For the PPC97x and Power5, yes; for the "customized cores", no. If you have an example, feel free to share it, though.
 
nAo said:
Vysez said:
Still, Cell is/was being created with 45nm in mind.
Source?

Vince and Panajev. :LOL:

More seriously, I forgot the IMO at the end of the sentence.
Still, we have plenty of evidence that they're aiming at 45nm, since they're already boosting research for that process. (They already have working samples, IIRC?)
And the (presumed) sizes of the CPU + eDRAM and the GPU + eDRAM (let's call it the dream setup) show that a smaller process is needed if they don't want their cost per chip to skyrocket.
And if you add the various interviews (old interviews!) about Cell (especially the one with the guy in charge of the OS) talking about a 2006 launch...
A Cell created with 45nm in mind is not "that" crazy.

However, it's pure speculation (At least, on my side).

edit: typos
 
They already have working samples, IIRC?)
If they do, it's 90nm maybe.... maybe 65nm.

A Cell created with 45nm in mind is not "that" crazy...
Well, the plants should be doing 65nm by the end of 2005. I doubt they will have full-scale production on 45nm by 2006.

In a perfect world, maybe. Every other company has had trouble with processes since 130nm. Even IBM had trouble with 90nm. I highly doubt they will get a free pass on 65nm.
 
jvd said:
If they do, it's 90nm maybe.... maybe 65nm.

  • EETimes. 6.21.2004 said:
    TOKYO — Sony Corp.'s Nagasaki 300-mm fab has begun test production of Cell processors, Ken Kutaragi, Sony's executive deputy president and COO, acknowledged this week. He declined to elaborate...

    Sony and its game subsidiary has thus far invested ¥115 billion (about $1 billion) in the 300-mm fab to establish a 65-nm process. The fab will serve as Sony's base for manufacturing Cell processors and other devices fabricated with the 65-nm process.

Jvd said:
Well, the plants should be doing 65nm by the end of 2005. I doubt they will have full-scale production on 45nm by 2006.

Nagasaki should be in production during 1H2005, slated production is 15,000 wafers/month. OTSS/Oita is slated to begin in late 2H2004 eventually reaching full production at 12,500 wafers/month. The line at E.Fishkill is slated to come online in 1H2005 with unknown production capacity.

Sony has been shipping low-K 90nm EE+GS's and Toshiba has been shipping a 90nm SoC since last fall, they're quite a bit ahead of the PC 3D industry. Almost a year in fact.

Concerning 45nm:

  • Nanoinvestor said:
    Tokyo--Sony Corporation [profile] and Toshiba Corporation [profile] today announced that they would collaborate in the development of highly advanced 45-nanometer (nm) process and design technologies for next-generation system LSI. Under the terms of an agreement, the two companies will take their successful development of 65nm process technologies to the next level, with positive results expected in 2005.

    Sony and Toshiba have worked together to pioneer IC process technology since May 2001, in a collaboration that has resulted in co-development of cutting-edge 65nm design process that will soon be applied to the sample products. The companies have decided to build on this achievement and to apply the design know-how and cutting-edge technologies gained from developing the 65nm process to next generation 45nm process technology.

    Sony and Toshiba signed the joint development agreement in Tokyo and it calls for completion of the project by late 2005, with the ultimate goal of being first to market with 45nm know-how. The project will have a budget of 20-billion yen, to be shared by both companies, and approximately 150 engineers from the two companies are expected to work on the project at Toshiba's Advanced Microelectronics Center in Yokohama, Japan and Oita Operations in Kyushu island of Japan.

PS. I never stated it was a 45nm design; I've been consistent on it being 65nm since 2001, when most people stated it was to be 100nm.
 
Here are a few links:

Link

Toshiba Makes Major Advances Toward 45 Nanometer Process System LSI

Develops World's First System LSI Technology for 45nm Generation

TOKYO -Toshiba Corporation unveiled a high performance metal-oxide semiconductor field-effect transistor (MOSFET) and advanced multi-layer wiring technology, both elemental technologies for 45-nanometer (nm) system LSI process technology two generations in advance of today's 90nm process technology.

Next-generation broadband digital consumer electronics will rely on high-performance LSI, particularly System-On-Chip (SoC) devices with extremely high levels of integration, to process huge volumes of data in real time. However, achieving this requires advances in finer process technologies and overcoming the twin hurdles of improving performance while reducing power consumption. Both rely on reduction of power supply voltage, which also requires a thinner transistor gate oxide film.
But thinner film is more susceptible to current leakage, which degrades performance.

Toshiba found a solution in a new gate-oxide technology and an optimized gate oxide film and has applied this to a high performance MOSFET with lower current leakage. The company has also succeeded in developing new multi-layer wiring technology essential for the dense wiring of highly integrated SoC. This new technology optimizes wiring parameters in terms of operating frequency and power consumption suited to 45nm generation products.

These new elemental technologies will support Toshiba in advancing development of 45nm generation system LSI.

http://www.toshiba.co.jp/about/press/2004_02/pr1201.htm

Tokyo -- Sony Corporation and Toshiba Corporation today announced that they would collaborate in the development of highly advanced 45-nanometer (nm) process and design technologies for next-generation system LSI. Under the terms of an agreement, the two companies will take their successful development of 65nm process technologies to the next level, with positive results expected in 2005.

Sony and Toshiba have worked together to pioneer IC process technology since May 2001, in a collaboration that has resulted in co-development of cutting-edge 65nm design process that will soon be applied to the sample products. The companies have decided to build on this achievement and to apply the design know-how and cutting-edge technologies gained from developing the 65nm process to next generation 45nm process technology.

Sony and Toshiba signed the joint development agreement in Tokyo and it calls for completion of the project by late 2005, with the ultimate goal of being first to market with 45nm know-how. The project will have a budget of 20-billion yen, to be shared by both companies, and approximately 150 engineers from the two companies are expected to work on the project at Toshiba’s Advanced Microelectronics Center in Yokohama, Japan and Oita Operations in Kyushu island of Japan.

Continued advances in digitization fuel demand for the ability to access, process, save and enjoy increasingly rich data sources. That in turn is driving demand for system LSI that combines increased miniaturization with enhanced functionality, faster operating speeds and lower power consumption. Sony and Toshiba will position themselves in the vanguard of meeting these demands through the industry-leading development and deployment of 45nm design technologies.

Of course, it could fall short, just like the 90nm process did for the PSX. Still, they're already working on it.
However, only time will tell whether their work comes to fruition in time. :D

edit:
Vince said:
PS. I never stated it was a 45nm design, I've been consistent on it being 65nm since 2001 when most people stated it was to be 100nm.

:oops: I was kidding; I never meant to put words in people's mouths.
It was my "guesstimation".

BTW, I didn't mean to imply it was "designed" for 45nm; I wanted to say that it was created with a fast shift to 45nm in mind.
 
Well, I hate to be an asshole, but they never stated it was a 65nm chip.

They only said that they began test production of Cell processors at the 300mm fab, and, separately, that they have invested $1 billion in the 300mm fab to establish a 65nm process. No mention of whether that has happened yet. :D
 
MfA said:
Jaws said:
or the truly revolutionary part, i.e. the software and the compiler for this hardware...

Haha, that is a revolutionary amount of wishful thinking.

I would be surprised if they have conceived of anything revolutionary: no revolutionary programming model for local parallel programming, let alone for the distributed case.
.....

Well, a revolution wouldn't be a revolution without doubters! After all, otherwise we'd all see it coming ;) ... The word "revolution" is abused; how about calling it not a revolution but more than an evolution! ... But the thing is, half a dozen Suzuoki Cell patents always mention a new programming model? :?

SUMMARY OF THE INVENTION

[0010] In one aspect, the present invention provides a new architecture for computers, computing devices and computer networks. In another aspect, the present invention provides a new programming model for these computers, computing devices and computer networks

Suzuoki Cell patents

-----

zidane1strife said:
.....
I have heard that a prototype was tested, IIRC, and the sustained performance obtained from it was excellent, which made them give the go-ahead to continue with the project. Since then, they've continued to say how they'd deliver what they'd promised, with due reason.
....

So a project of this magnitude, and one that looks so risky, has been approved by the heads of Sony, Toshiba and IBM... and people are still doubting the obvious? I can see this high-powered senior meeting:

SONY: "The devs hated the PS2 architecture; it's a bitch to develop for. We could've given them better tools... hmm, or a simpler architecture. Anyway, it'll all change for PS3!"

Toshiba: "Well, you're not thinking about that Cell, schmell, are you? That's a bit OTT, isn't it?"

IBM: "What's this Cell, tell me about it, we have experience in many advanced techs?"

/...later

IBM: " ...what!... how many processors!!!... and that's better? ...so that's our plan?"

Toshiba: "...some shit's gonna hit the fan if this doesn't work!"

Sony: " ...yep, a fortune to invest and the reputations of Sony, Toshiba and IBM and our futures are on the line if it doesn't work!"

IBM: "Well, let's create a team that makes it happen..."

/...later, a dedicated design and R&D centre in Austin, Texas, was built and 300 of the world's leading engineers gathered... STI finishes a PowerPoint presentation of their plans for CELL...

STI: " So wad ya tink...any questions?"

Engineer1: " err...wtf?"

Engineer2: " err...that's bullshit!"

Engineer3: "...dudes...wtf have you been smoking! That's gonna be crap at single-threaded general-purpose computing, and what about the GPU???"

Engineer4: " LMFAO... and that distributed parallel crap is gonna work in high-latency broadband environments!... yeah, right!"

Engineer5: "...has anyone considered how the fuck you're gonna program that shit, never mind manufacture it???"

STI: "We didn't gather 300 of the world's top engineers in this room to tell us the obvious! SHUT UP and get on with it!!!"

-----
nAo said:
What if PS3's theoretical performance is very far from the numbers we've heard so far (1 teraflop/s)?
That's what I was told by a couple of sources now..

You've just ruined the perfect myth, alongside Santa Claus! :( ... Btw, is that north or south of 1 TFLOP? :p

----
Fafalada said:
...
And to make matters worse, we all know that Xenon leak is just a ruse; those specs are far too underpowered - the real thing will be at least 2-3x.

Weren't the rumoured Xe specs confirmed by devs to be close? If it's a ruse, it can work both ways as well? ;)

----
DeanoC said:
....
Cell seems to have 1 thread per APU. How exactly are you going to hide the data dependencies that a graphics pipeline has?
Unless Cell is very different from the model currently thrown about, it would make a terrible GPU... or a bloody hard one to program (manually hiding latency at the scale of a GHz processor is :devilish:)

I don't think we can even think about threads in the normal sense with Cell and its Apulets. Out of 6 Suzuoki Cell patents describing in detail a new computer architecture, new processor ICs, a new programming model, etc., never once is the word "thread" mentioned. Is it just me, or is that a little bit strange? :?

Anyway, here's an IBM patent / B3D thread describing a distributed cache bus / cache system spanning all the PUs in the CPU/GPU to relieve or hide data dependencies, latencies and bus bottlenecks...

----
passerby said:
.....
Obviously we can't expect the amount of resources that went into building a GSCube to be channeled into each PS3 (16 GSes with 32MB of eDRAM each :oops: ). But it does demonstrate that there is probably some way such a design can work. Anyone had experience with the thing?

I don't... but here's a link to the next best thing: Square USA describing their experience programming the GSCube for the SIGGRAPH demos in 2000-2001. An interesting read... They describe using Lisp/Scheme in programming the GSCube; an indication of things to come in PS3? Here's an extract:

The cost of having a full-featured scripting language inside the real-time rendering engine is becoming smaller nowadays. We expect it to be the mainstream. With clever GC and supporting tools, Scheme can be a good choice for it.

We also believe in the fusion of the authoring and playback environments, not only in consumer game production but also in high-end computer graphics. Such applications will be required to deal with complex structures, spread out among processors and changing dynamically at run time. Lisp is still a good language for it.

----
OryoN said:
Are there any images(or, wishfully, video) of any GSCube demos from that event?

I wish there were; I can't find squat diddly shit! :p

---
Panajev2001a said:
.....
You know what though ?

F*ck it, royally f*ck it... I've heard this s*&t for the past 10+ years regarding Sony's efforts, regarding Sony as a company, regarding SCE and the PlayStation business.

:oops: :oops: :oops: For what it's worth, when it finally comes out, I think you'll be just as happy, if not happier, with the PS3 architecture/SDK as Deano seems to be with Xenon's SDK! :p

---
DeanoC said:
.....
I like both, Cell looks to be an awesome CPU, XeGPU looks to be an awesome stream processor. Can I have both in one machine please? :)

Cell is designed from the ground up to be a stream processor/architecture... If the Xe GPU is a stream processor, then both the PS3's CPU and GPU are stream processors. So if you want both in one machine, that's a PS3! :p

From our recent discussion about CPU=GPU: STI haven't made that distinction; to them a stream processor is both CPU and GPU! :p
 
jvd said:
Well, I hate to be an asshole, but they never stated it was a 65nm chip.

They only said that they began test production of Cell processors at the 300mm fab, and, separately, that they have invested $1 billion in the 300mm fab to establish a 65nm process. No mention of whether that has happened yet. :D

Straws. Grasp. Must. ;)

A preceding article:

  • The Register. December 2003 said:
    Trial runs of the advanced semiconductor manufacturing process which will eventually create the much-vaunted Cell microprocessor are set to start at Toshiba's fabrication plant in March next year.

    According to a statement from Sony and Toshiba issued today, work on the 65nm chip production technology - which is more advanced than any system in commercial use today, as most companies are still coming to grips with the switch from 130nm down to 90nm processes - is proceeding to plan.

    The nm (nanometre) measurement is important, because the smaller the size of the components that are used on a chip surface, the faster and less power-hungry the devices can be - a key consideration in the creation of Cell, which has been described as a "supercomputer on a chip".

    Toshiba is expected to produce the first sample chips from its 65nm line in March and will ship them to customers for evaluation, but full production of the technology is not expected to ramp up until summer 2005 - just about in time to supply components for Sony's PlayStation 3 launch at the end of 2005, if that is indeed the plan.

    The sample production will take place at Toshiba's Yokohama fabrication plant, but the commercial 65nm line will be in a new factory, which is currently being built in Oita prefecture and is expected to begin producing chips at 90nm in mid-2004 before being upgraded to the 65nm process further down the line.

    Sony is also building its own fabrication plant for the 65nm Cell chips, with an investment estimated to be in the region of €5 billion being made in new plant in Nagasaki prefecture.

    Arnnet.com 7.13.2004 said:
    Toshiba is slightly further along in the development of the technology and is currently evaluating early 65-nanometer samples, said Junichi Nagaki, a company spokesman. Toshiba is using a pilot line at a facility in Yokohama to produce the chips.

    One of the first uses for Toshiba's technology will be the production of the Cell processor, which will be used in Sony Computer Entertainment's (SCEI) upcoming PlayStation 3 games console and future consumer electronics products from other companies. In this area Toshiba is working with SCEI, Sony and IBM on technology development. Mass production is scheduled for the first half of 2005, said Nagaki.
 
Jaws said:
OryoN said:
Are there any images(or, wishfully, video) of any GSCube demos from that event?

I wish there were; I can't find squat diddly shit! :p

From the paper you provided:
It was used for technical demonstration titled ``Final Fantasy in Real Time'' at SIGGRAPH 2000 and 2001 exhibitions, showing a scene from the movie ``Final Fantasy: The Spirits Within'' rendered in real time, on SCE's GSCube parallel rendering engine (in 2000) and nVidia's Quadro DCC graphics card (in 2001).

So it might be more or less the same demo. (An NV20 is quite different from a 16-GS GSCube, though :? )
 
passerby said:
Oops, on further thought I was probably shooting my mouth off with that one. :oops: It's too much to ask of compilers to automatically unroll every unrollable loop.
It's not too much to ask - that's what compilers for the VUs all do (well, VCL compiles assembler code, so you have direct control over which loops it will unroll and which it won't, but it's the same thing).
Deano was referring to what the hardware does for you.

The situation was bad back when the PS2 started out, when there were no compilers for the VUs at all - so yes, we did all the code optimization by hand back then.

Or we can go kidnap some launch-title developer.
That wouldn't get you any usable info right now either. You're out of luck short of kidnapping the SCEI engineers involved in the project :p


Vysez said:
We're hearing 2 rumors about the final XeCPU: one talks about customized PPC97x cores and the other about customized Power5's... So, of course, they will be used in other situations, since they already are.
From what I understand, both of these rumours just sound silly, in more ways than one - unless the "customized" part refers to some significant redesign of the core, such as completely altering the number of execution units, etc.
Then again, I'm also not one who expects the PEs to be PPC97x derivatives either.

jaws said:
If it's a ruse, it can work both ways aswell?
I thought all the sarcasm I put into that post was self-explanatory. But anyway...

ERP said:
The VU's are something I happen to like. Even at that I question the value of VU0 on the die, sure it can be used, but for the majority of games it just isn't worth doing anymore than using it in macro mode.
True - but I still say that's a VU0 design oversight: a processor that is meant to be used in standalone mode needs standalone output capability.

Panajev said:
Macro mode as it is now slows the CPU down because it kicks in where it hurts the most: the lack of a good cache hierarchy for the RISC core.
Uhmm, how do you gather that? :?
 