Official PS3 Thread

Well, I still feel, with all the news flying both ways, that we can't dismiss the Cell chip being used in the PS3, or it not being used. We can only wait for more info.
 
chaphack said:
Since it is all speculation at the moment, what IF what Kutaragi said was true?

"We wanted to use CELL for PS3 CPU, but the technology will only be cost efficient in 2007"

So what will be in PS3? A GSCube128 with tons of mini EE/GS chips latched together?

Sony showed pictures of shrinking EE/GS.....

Will that disappoint many?

What you have quoted doesn't have anything useful on the specifics of PS3; the pictures are only about the PS2 and the later cost cutting by shrinking and integrating.

As you have said you won't believe anything from Sony again, why would it disappoint you if you don't have any expectations of it?

BTW, the PS2 hasn't disappointed me since its launch, but the Xbox has disappointed me the most, as most of the games are just not fun or are ugly; only a handful of games are worth buying (I have already bought the ones I think are worth it).

And ooh, I also have Halo and I think it is simply ugly (not as ugly as Brute Force); I will take Metroid Prime over it any day.
 
the pictures are only about the PS2 and the later cost cutting by shrinking and integrating.

That's what I'm thinking: producing tiny PS2 chips in the new 65nm plant? Maybe that IS the case, a GSCube128/256?

why would it disappoint you if you don't have any expectations of it?

chap: "Not going to get disappointed anymore if 1000X doesn't come true, used to Sony talkalot talk. Will that disappoint many?"

Read.

most of the games are just not fun or are ugly ... I also have Halo and I think it is simply ugly

MAMAMIA! KAWAGIRASIA! :oops: :oops:
Tell me you said that just to spite me? :cry:
 
chaphack said:
the pictures are only about the PS2 and the later cost cutting by shrinking and integrating.

That's what I'm thinking: producing tiny PS2 chips in the new 65nm plant? Maybe that IS the case, a GSCube128/256?

why would it disappoint you if you don't have any expectations of it?

chap: "Not going to get disappointed anymore if 1000X doesn't come true, used to Sony talkalot talk. Will that disappoint many?"

Read.

most of the games are just not fun or are ugly ... I also have Halo and I think it is simply ugly

MAMAMIA! KAWAGIRASIA! :oops: :oops:
Tell me you said that just to spite me? :cry:

I dunno if a faster-clocked PS2 would help. I also don't know how happy programmers would be if Sony made a chip with, like, 8 GSs and 8 EEs on it. I doubt that'd be fun to program for. But I guess they could just do that. Maybe even pump the EE and GS to 2GHz each. Wonder how that would work for a new system.
 
chaphack said:
why would it disappoint you if you don't have any expectations of it?

chap: "Not going to get disappointed anymore if 1000X doesn't come true, used to Sony talkalot talk. Will that disappoint many?"

Read.

No one can have a definite measure of how many times faster one machine is than another; they are all claims, interpreted the way people want, whether pro or con.

Everyone knows marketing numbers are like that, no specifics, only a vague number.

chaphack said:
most of the games are just not fun or are ugly ... I also have Halo and I think it is simply ugly

MAMAMIA! KAWAGIRASIA! :oops: :oops:
Tell me you said that just to spite me? :cry:

I wouldn't say anything just to spite anyone. I do think Halo is ugly; I have only played it for an hour or so and have left it in its box since then. I bought it because many people were claiming its superiority and beautiful graphics, which I didn't find to be the case. But I haven't sold it like the other games, as I tend to keep one or two famous titles for each console.
 
I think that the GS code might be able to be executed by the Cell based Visualizer ( sort of like the PSX GPU was not embedded in the PlayStation 2 I/O CPU and its code ran on the GS )...

In this case they would only need to integrate the EE and the SPU2 ( personally I'd use the EE + SPU2 as I/O CPU and Sound Processors of PlayStation 3... the EE has enough RAW power to be used quite effectively as both an I/O CPU and as a nice Sound DSP ;) )...

A 65 nm die-shrink of the EE would be very, very tiny... the current EE+GS chip realized in 90 nm is approximately 86 mm^2... so EE + SPU2 ( with Sound RAM embedded ) = ~38-40 mm^2 using 65 nm technology? I do not know the size of the SPU2 + Sound RAM chip ( the Sound RAM is now embedded in the SPU2 )...
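
To put a rough number behind that guess: a quick back-of-the-envelope check in Python. The 86 mm^2 EE+GS figure is the one quoted above; the ideal (feature size)^2 area scaling, and everything else here, is just an assumption for illustration that ignores pads, analog blocks, yield and so on.

# Idealized die-area scaling from 90 nm to 65 nm (assumes area scales with the
# square of the feature size; real shrinks do somewhat worse than this).
ee_gs_90nm_mm2 = 86.0                   # EE+GS die at 90 nm, figure from the post
scale = (65.0 / 90.0) ** 2              # ideal 90 nm -> 65 nm area factor, ~0.52
print(round(ee_gs_90nm_mm2 * scale, 1), "mm^2")   # ~44.9 mm^2 for a shrunk EE+GS
# Drop the GS and add a small SPU2 + Sound RAM block, and the ~38-40 mm^2
# ballpark above looks at least plausible under these assumptions.

Anything in that size range would be a very cheap block to fold into a larger I/O chip.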
 
Panajev2001a said:
I think that the GS code might be able to be executed by the Cell based Visualizer ( sort of like the PSX GPU was not embedded in the PlayStation 2 I/O CPU and its code ran on the GS )...

In this case they would only need to integrate the EE and the SPU2 ( personally I'd use the EE + SPU2 as I/O CPU and Sound Processors of PlayStation 3... the EE has enough RAW power to be used quite effectively as both an I/O CPU and as a nice Sound DSP ;) )...

A 65 nm die-shrink of the EE would be very, very tiny... the current EE+GS chip realized in 90 nm is approximately 86 mm^2... so EE + SPU2 ( with Sound RAM embedded ) = ~38-40 mm^2 using 65 nm technology? I do not know the size of the SPU2 + Sound RAM chip ( the Sound RAM is now embedded in the SPU2 )...

What about the PSOne? Or will it get dropped?
 
I think the current I/O CPU of the PlayStation 2 will be dropped unless there is critical code that uses the I/O CPU that cannot be run in software...

We have two scenarios:

PSX emulation: done in software using an advanced version of the Connectix software they bought, plus all their experience with the PSX ( Sony has much more info regarding the PSX HW, and coding experience with it and its tools, than people who make emulators for the PC like the Bleem! guys ), running on the Broadband Engine.

PlayStation 2 backward-compatibility: here the only thing we would have to run in software is the I/O CPU, which is basically a PSOne on a chip minus the GPU and the RAM ( which is off-chip )...

If we have an efficient and accurate PSX CPU emulation code that we use for software PSX backward-compatibility then we can re-use it for PlayStation 2 backward-compatibility...
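
To make "running a CPU in software" concrete, here is a toy fetch/decode/execute loop for a few MIPS R3000-style instructions (the PSX CPU family) in Python. It is only a sketch of the technique, with nothing to do with Sony's or Connectix's actual emulator; a real one would also cover the full instruction set, the GTE, interrupts, DMA and timing.

# Minimal interpreter-style CPU emulation sketch (illustrative only).
regs = [0] * 32                      # R0..R31; R0 is hard-wired to zero
mem = {0x0: 0x3C011234,              # lui  $1, 0x1234
       0x4: 0x34215678,              # ori  $1, $1, 0x5678
       0x8: 0x00211020}              # add  $2, $1, $1
pc = 0x0

while pc in mem:                     # stop when we run off the tiny program
    instr = mem[pc]
    op = instr >> 26
    rs, rt = (instr >> 21) & 31, (instr >> 16) & 31
    rd, imm = (instr >> 11) & 31, instr & 0xFFFF
    if op == 0x0F:                                 # LUI: load upper immediate
        regs[rt] = (imm << 16) & 0xFFFFFFFF
    elif op == 0x0D:                               # ORI: OR with immediate
        regs[rt] = regs[rs] | imm
    elif op == 0x00 and (instr & 0x3F) == 0x20:    # ADD (SPECIAL, funct 0x20)
        regs[rd] = (regs[rs] + regs[rt]) & 0xFFFFFFFF
    regs[0] = 0                                    # keep R0 pinned to zero
    pc += 4

print(hex(regs[1]), hex(regs[2]))    # 0x12345678 0x2468acf0

The emulated CPU is just ordinary data and control flow on the host, so the same core could in principle be reused wherever that CPU shows up, which is exactly the re-use argument being made here.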

The size of a 65 nm PlayStation 2 I/O CPU would be really small, so if PlayStation 2 title support is not again in the 98-99% range as it was for PSX games running on PlayStation 2 ( if only 70-75% of PlayStation 2 games ran on PlayStation 3 it would be a huge problem, and Sony is aware of this ), you should be able to add that unit to the PlayStation 3's I/O ASIC... EE + SPU2 ( with embedded SPU2 RAM ) + the PlayStation 2's I/O CPU ( we could use the Yellowstone DRAM if we can work out the timing to emulate the I/O CPU's DRAM with it, or else we would have to put the I/O RAM off-chip or embed it )...

The I/O ASIC should be a separate chip and would not be embedded in the Broadband Engine or the Visualizer...

There is no need to include the PC800 RDRAM of the PlayStation 2, as Yellowstone DRAM can easily be used to provide the same 800 MHz signalling rate... in this PlayStation 2 compatibility mode you would just pass a 100 MHz clock instead of the regular 400 MHz clock to the Yellowstone DRAM PLLs and they do the magic themselves...

The PLLs would do this: 100 MHz * 4 = 400 MHz

The DRAM works in DDR mode, so we can say 400 MHz * 2 = 800 MHz, and we can disable the second channel of the 64-bit Yellowstone memory controller...
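
The clocking trick is just arithmetic, so here it is as a tiny Python sketch (purely illustrative, no real memory controller is being modelled):

# Yellowstone/XDR-style clocking as described above, reduced to arithmetic.
ref_clock_mhz = 100        # reference fed to the DRAM PLLs in PS2-compat mode
pll_multiplier = 4         # the PLLs multiply the reference: 100 * 4 = 400 MHz
ddr_factor = 2             # data moves on both edges of the 400 MHz clock
print(ref_clock_mhz * pll_multiplier * ddr_factor, "MHz effective")   # 800 MHz

That lands on the same effective 800 MHz signalling rate as the PS2's PC800 RDRAM, which is the whole point of feeding the slower reference clock in compatibility mode.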
 
Panajev2001a said:
jvd... no, what nVIDIA did prove is not to go with the wrong chip supplier... I think Sony, IBM and Toshiba trust themselves a bit more than nVIDIA trusted and trusts TSMC as far as state-of-the-art manufacturing processes go... in fact nVIDIA has decided to pay the heavy tab of having IBM manufacture their high-end chips because they can deliver.

Fitting Cell into 90 nm would butcher its performance quite a bit ( and still send costs through the roof because of the enormous chip sizes ), and moving it to 65 nm later would only save costs; it would not give us the performance back...

The manufacturing process used to manufacture the PlayStation 3 chips will also determine their specs, and the next die-shrink will be the one that cuts manufacturing costs...

What makes you think IBM is better at manufacturing complex GPUs on a mass scale than TSMC?

Also, what makes you think Nvidia's chip design wasn't the problem? ATI doesn't have the same problems with their designs.
 
What makes you think IBM is better at manufacturing complex GPUs on a mass scale than TSMC?

Are you serious ?

POWER 4, the upcoming PPC 970, the upcoming POWER 5, etc...

Also, do you see TSMC having 90 nm ready now ( like Intel does ) and 65 nm scheduled to arrive in 2005?
 
Panajev2001a said:
What makes you think IBM is better at manufacturing complex GPUs on a mass scale than TSMC?

Are you serious ?

POWER 4, the upcoming PPC 970, the upcoming POWER 5, etc...

Also, do you see TSMC having 90 nm ready now ( like Intel does ) and 65 nm scheduled to arrive in 2005?

You think they're as complex as the GFX? BTW, I didn't know IBM and Intel were the same company. Putting two simple cores into one die isn't necessarily more complex than a single complex core. You think TSMC couldn't put two simple cores into a single die?
 
To move further down its technology roadmap, TSMC said it plans to insert several brand new and yet-to-be-tested tools into its fabs to enable the 65-nm node. These new systems include atomic layer deposition (ALD) and next-generation lithography (NGL) equipment. Its current 65-nm roadmap calls for 157-nm scanners and, surprisingly, electron-beam projection lithography (EPL).

TSMC's aggressive process-technology roadmap underscores a massive change within the overall status of the foundry industry,

.....


But being on the "bleeding-edge" of technology has its risks too, especially in the development of untested, next-generation processes and tools, Hu noted.



65nm doesn't sound like a walk in the park, IMHO. Sony is putting its big eggs in the Cell basket.
 
Obviously Sony and friends researched it pretty well, and they are going to put it into action.

Sony doesn't have competition like Intel does with AMD; they can take their sweet time doing things like this. No one is rushing them to make any kind of move to 65 nm.
 
You think they're as complex as the GFX? BTW, I didn't know IBM and Intel were the same company. Putting two simple cores into one die isn't necessarily more complex than a single complex core. You think TSMC couldn't put two simple cores into a single die?

Sounds like you're confusing the various types of complexity. Don't let transistor count confuse you. CPUs are definitely more complex; just look at the amount of work that goes into a CPU. They're more complex in the types of problems they're solving and the work that goes into designing them. GPUs have simpler problems being solved in them.

GPUs involve a lot of circuitry; that doesn't mean it's complex, just that there's a lot of it -- which also doesn't mean it's trivial. ;)

Think about this: multiplying a 100-digit number by a 100-digit number is no more complex than multiplying a 1-digit number by a 1-digit number; the only difference is the amount of work involved.
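
The analogy translates directly to code. Here is a small Python sketch (my own illustration, nothing more): the same schoolbook multiplication routine handles both cases, and only the count of single-digit operations changes.

# Schoolbook multiplication over digit lists (least-significant digit first),
# with a counter for the single-digit multiplications performed.
def schoolbook_multiply(a_digits, b_digits):
    result = [0] * (len(a_digits) + len(b_digits))
    ops = 0
    for i, a in enumerate(a_digits):
        carry = 0
        for j, b in enumerate(b_digits):
            ops += 1
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b_digits)] += carry        # deposit the final carry
    return result, ops

_, ops_small = schoolbook_multiply([7], [8])             # 1-digit x 1-digit
_, ops_big = schoolbook_multiply([3] * 100, [9] * 100)   # 100-digit x 100-digit
print(ops_small, ops_big)   # 1 vs 10000: identical algorithm, 10,000x the work

Same algorithm, same design complexity; the big case just burns more operations, which is the sense in which a huge transistor count alone doesn't make a GPU a more complex design.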
 
You think they're as complex as the GFX?

I think you're right. In terms of design complexity, something like the Broadband Engine or POWER4 is actually a lot simpler compared to something like the GeForce FX or NV's next-gen GPU.

That's why, when NV blamed TSMC for the NV30, I think it's only half of the story. And bringing IBM into the picture is not a sure way of solving the problem either.
 
That's why, when NV blamed TSMC for the NV30, I think it's only half of the story. And bringing IBM into the picture is not a sure way of solving the problem either.

I don't think TSMC did anything wrong; I think Nvidia screwed up. The clock rates being achieved by the 9600 make me think the NV30 itself is the problem. I'm pretty sure NV would blame TSMC either way, since they don't want to take the flak and make themselves look bad.

I think you're right. In terms of design complexity, something like the Broadband Engine or POWER4 is actually a lot simpler compared to something like the GeForce FX or NV's next-gen GPU.

I don't see how GPUs are more complex. The problems they're solving IMHO aren't more difficult at the logic level, and they're definitely not more difficult at the transistor level -- CPUs use a lot of custom circuitry here rather than standard cell libraries. I think the large transistor count makes everybody go, 'yup, must be more complex.'
 
Both Intel and IBM have 90 nm technology NOW, and even by the most optimistic people TSMC has been called ~6 months behind those top guys.

TSMC, last I heard, announced they were slowing down due to the MASSIVE cost of upgrading to new technologies like 90 nm and beyond...

If you seriously think a GFX is more complex than a Pentium 4, Itanium 2, EV7, Opteron or POWER 4... you are seriously MISLED...

GPUs have the advantage of working in a single arena, 3D graphics... they can have lots of parallel units because the code they work on has quite a high level of parallelism, which can bear much higher latencies, and GPU pipelines can be very deep since branch prediction is not much of a problem...


Wow... GFX does branching by predication... WOW, incredible... So what ? Branch prediction HW ? Nada... LOAD/STORE re-ordering ? Nada...

What about dynamic branch prediction without predication achieving >95-97% of correct branch target prediction ?
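
For anyone not following the predication vs. branch-prediction jab, here is a small Python illustration (hardware obviously isn't Python; the arithmetic select stands in for a predicated result or conditional move):

# Branch vs. predication, sketched in plain Python (illustrative only).
def branchy_abs_diff(x, y):
    # CPU style: a real branch; hardware would try to predict which way it goes.
    if x > y:
        return x - y
    return y - x

def predicated_abs_diff(x, y):
    # Predication style: compute both candidate results, then select one.
    # There is no branch to mispredict, just extra work that may be thrown away.
    a, b = x - y, y - x
    p = x > y                         # the predicate; True/False acts as 1/0
    return p * a + (1 - p) * b        # stand-in for a hardware select/cmov

print(branchy_abs_diff(7, 3), predicated_abs_diff(7, 3))   # 4 4

Predication dodges misprediction penalties by always paying for both sides, which is cheap when the work is small and massively parallel (shader code) and expensive for general-purpose code, which is roughly the contrast being drawn here.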

Deep Out Of Order execution engine ( think 128 instructions in flight like the Pentium 4 ) ? nada...

Do you think nVIDIA could design SRAM cells to achieve a 16 KB cache with 2 cycle latency and that can scale up to >3 GHz ?

Do you think nVIDIA could even design a complex and high performing general purpose processor like a Pentium 4 or even an Itanium 2 ?

Would nVIDIA even have compiler coders of the same level Intel has?

The GFX has ~100 MTransistors... and it runs at 500 MHz, right? The Pentium 4 Prescott runs at 3+ GHz, has ~100 MTransistors too, and is expected to scale to 4-5 GHz...

Simply dealing with x86 instructions is a pain and that is only the beginning...

I would hold a contest if I could... give two product generations to Intel and to nVIDIA + TSMC... Intel would have to produce a next-generation high-end DX9 GPU and nVIDIA would have to produce a Prescott beater ( also, the nVIDIA CPU would have to support the full IA-32 instruction set )...

Let me say that I do not see nVIDIA beating Intel in the challenge even with two tries...

No, allying with AMD is not part of the challenge :p
 
I don't see how GPUs are more complex. The problems they're solving IMHO aren't more difficult at the logic level, and they're definitely not more difficult at the transistor level -- CPUs use a lot of custom circuitry here rather than standard cell libraries. I think the large transistor count makes everybody go, 'yup, must be more complex.'

It's the approach. If CPUs took a GPU's approach to their problems, you wouldn't need a GPU at all, because the CPU would be just as fast.
 