So is the PS3 GPU one generation beyond the Xenon graphics chip?

I believe the PS3 will launch on a 65nm process, then be moved down to 45nm in 2007, then down to 32nm (or whatever the next step from 45nm turns out to be) sometime in the 2008-2009 timeframe. I'm merely guessing that the PS4 will be on 25nm or smaller, if that's possible, sometime early in the next decade (2011-2013).
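(For reference, full process nodes have historically stepped by roughly a 0.7x linear shrink per generation, which is where the 90 -> 65 -> 45 -> 32 -> 22nm ladder comes from; a quick sketch of that rule of thumb:)

Code:
# Full nodes historically shrink by ~0.7x linear per generation
# (0.7^2 ~ 0.5x area). Stepping down from 90nm:
node = 90.0
for _ in range(4):
    node *= 0.7
    print(f"~{node:.0f}nm")  # ~63, ~44, ~31, ~22 -> the marketing 65/45/32/22 ladder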
 
Mulciber said:
Jov said:
Mulciber said:
No, that was not "the" argument; that was the argument that Vince put forth. Considering the nature of what is being argued, clock speed and transistor count both play a crucial role in determining how quickly a chip can be put into a commercially viable product.

So you want to factor in process size (@90nm) and speed (xHz)?

Firstly, comparing the EE+GS to the P4 is already "apples vs. oranges".

Um, did you miss the point of my question to Vince entirely, then?
That is exactly what I was pointing out.

I re-read your question, and I can't answer for Vince, but personally I see dealing with such high clock speeds as one challenge for Intel, just as integrating the EE+GS was a different challenge for Sony. Hence it's still "apples vs. oranges".

Mulciber said:
Secondly, haven't AMD (and Sony and others) proved it's not just the speed of the chip, but the performance? At lower Hz, some chips still blow much faster ones out of the water under certain conditions.

I don't care to look up the FLOPS and GIPS counts of the P4 or the EE, but my guess is the 3.8GHz P4 is still faster than a 200MHz EE.

Quick google and we get:

EE (core + VU0 + VU1) = ~6.2 GFLOPS (single precision)

P4 @ 3.4GHz on 90nm = ~13.6 GFLOPS (single precision)

So what's your point? The EE+GS @ 90nm was bound to the initial targets set back in the late 1990s during its design, whereas Intel, at the time of its transition to 90nm, was pushing the clock-speed limits?

Edit: We don't know if Sony could have made the EE alone earlier or faster if the goals had been different, and the same applies to the Intel P4 @ 90nm. All we can take from it is which product was first to market at 90nm, without factoring in other variables that could have delayed them.
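For what it's worth, both peaks fall straight out of clock x FLOPs-per-cycle arithmetic. A quick sanity-check sketch (assuming the standard 4 single-precision FLOPs per cycle for the P4's SSE unit; the EE's per-cycle figure is just back-derived from the quoted ~6.2 GFLOPS):

Code:
# Peak single-precision throughput = clock rate x FLOPs retired per cycle.
# Assumptions: the P4's SSE unit retires 4 SP FLOPs/cycle (one 4-wide op);
# the EE's per-cycle figure is back-derived from its quoted ~6.2 GFLOPS peak.

def peak_gflops(clock_ghz, flops_per_cycle):
    return clock_ghz * flops_per_cycle

p4 = peak_gflops(3.4, 4)                    # ~13.6 GFLOPS
ee_flops_per_cycle = 6.2 / 0.3              # ~20.7, across core + VU0 + VU1
ee = peak_gflops(0.3, ee_flops_per_cycle)   # ~6.2 GFLOPS

print(f"P4 @ 3.4GHz: {p4:.1f} GFLOPS peak")
print(f"EE @ 300MHz: {ee:.1f} GFLOPS peak ({ee_flops_per_cycle:.1f} FLOPs/cycle)")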
 
They're very different chips.

Although it's kinda funny to see the EE @ 300MHz pushing 6.2 GFLOPS and a Pentium at ~3GHz pushing 13 GFLOPS. Ten times the clock speed for double the FLOPS performance...

But they're very different chips, so the comparison has no point.
 
Heh. Who brought up Intel, which is not in any next-gen console? It was aaronspink screaming "Intel is da king in da process technology dammit!", and it's largely irrelevant here :rolleyes:

Also, if you give Intel a 1-up for making a 3GHz chip on the 90nm process, you should give SCEI a 1-up for mixing eDRAM into a chip on the 90nm process. Needless to say, the latter is the more difficult challenge in terms of process technology.

BTW, 3GHz and 90nm have no inherent relation to each other. Intel expected that shrinking Prescott to 90nm would resolve its heat problem, but that was a mistake, and they had to redraw their release plans, not the process technology. The Pentium 4E at 90nm underperformed the Pentium 4 at 130nm, IIRC.
 
Megadrive1988 said:
I think both Xenon and PS3 GPUs will be similar in many ways. Both should be capable of peak vertex performance in the low billions, and in-game real-world performance of hundreds of millions of vertices/polygons per second with lighting.
The GPUs could be a lot different if the vertex programs are handled solely by the CPU on the PS3, and if the XeVPU is indeed built on a unified architecture (judging by what we know, the XeCPU would also be used for vertex work, though). But that doesn't change the point you made, which is that the two machines would be more or less in the same ballpark.

BTW, this is slightly off-topic, but the more I hear about next-gen games, from various statements to actual screenshots, the less I believe we'll see games with more than 1M polygons per frame (@60Hz, of course), especially in the first batch of games.
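Per-frame and per-second figures are easy to mix up, so as a sanity check, here's a minimal conversion sketch (assuming a locked 60fps):

Code:
# Convert a per-frame polygon budget into sustained per-second throughput.
# Assumes a locked frame rate with no dropped frames.

def polys_per_second(polys_per_frame, fps=60):
    return polys_per_frame * fps

print(f"{polys_per_second(1_000_000):,} polys/sec")  # 60,000,000 at 1M/frame @ 60Hz
# Still well below the "hundreds of millions per second" real-world
# figures mentioned earlier in the thread.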
 
Vysez said:
BTW, this is slightly off-topic, but the more I hear about next-gen games, from various statements to actual screenshots, the less I believe we'll see games with more than 1M polygons per frame (@60Hz, of course), especially in the first batch of games.

Getting pessimistic, huh?

I'm kinda like that, but more on the side of "the first next-gen games will probably have more than 1M polys per frame, with 23 shader programs for each pixel, but it will still look like tacky Z-movie kind of crap. *cough*DOOM3*cough*..."

That is, until the real games come out. I can only imagine what Tecmo, Sony's first-party teams, Polyphony Digital, and Konami will throw at us...
 
one said:
Heh. Who brought up Intel, which is not in any next-gen console? It was aaronspink screaming "Intel is da king in da process technology dammit!", and it's largely irrelevant here :rolleyes:

Personally I don't ignore those with unique industry insights, whatever they are saying...
 
DaveBaumann said:
one said:
Heh. Who brought up Intel, which is not in any next-gen console? It was aaronspink screaming "Intel is da king in da process technology dammit!", and it's largely irrelevant here :rolleyes:

Personally I don't ignore those with unique industry insights, whatever they are saying...

I don't ignore them either, but I still debate them if I am not convinced... how else am I going to learn ;) ?
 
Panajev2001a said:
I don't ignore them either, but I still debate them if I am not convinced... how else am I going to learn ;) ?

You shouldn't question the omniscient deities, Pana; it fucks with your karma (as you can see).
 
london-boy said:
Getting pessimistic, huh?
Yep, and it's kind of a good thing, since I can only be pleased if things turn out better than I expected. Nonetheless, I heard some numbers (polygon numbers) for a few games, and they're low... I mean really low.
As you said, those polygons will feature tons of shaders, but as you also pointed out, fragment programs could easily be irrelevant, and useless, if they're not coupled with a good number of polygons... in my opinion.
 
london-boy said:
That is, until the real games come out. I can only imagine what Tecmo, Sony's first-party teams, Polyphony Digital, and Konami will throw at us...

Well, Tecmo has supposedly been working on Code Cronus for 2 years already, and it will be about 3 by the time Xenon is released. It's very likely that we'll see something very impressive at (or very near) launch. I suspect the game will be very much in the modern Tecmo tradition (fighting or hybrid-on-rails), so it ought to be pushing the polys pretty hard, including every 'special effect' they can think of. It will most likely be the best-looking game on the planet for a few months.
 
Mulciber said:
I don't care to look up the FLOPS and GIPS counts of the P4 or the EE, but my guess is the 3.8GHz P4 is still faster than a 200MHz EE.
That would be 300MHz, and care to tell me how that is even relevant? The P4's clock wasn't frozen for the last 5 years, or else you would be comparing the EE to a 1.6GHz P4...

Mind you, the EE already had overclocking headroom back when it was at 180nm, and IIRC the GSCube parts were in fact clocked at 500MHz (at 180nm as well).
The 90nm part could clearly clock higher still, if there were any need for it.
 
Panajev2001a said:
I don't ignore them either, but I still debate them if I am not convinced... how else am I going to learn ;) ?

Of course, but dismissing things as "irrelevant" is not conducive to a learning process.
 
Fafalada said:
That would be 300MHz, and care to tell me how that is even relevant?
I'm astounded that people are still missing his point. Sony may have gotten their 90nm part out first, but it may well have also been a far easier part to develop and manufacture.
 
Fodder said:
Fafalada said:
That would be 300MHz, and care to tell me how that is even relevant?
I'm astounded that people are still missing his point. Sony may have gotten their 90nm part out first, but it may well have also been a far easier part to develop and manufacture.

We all got his point the second time he repeated himself. 83 times ago. :devilish:
 
DaveBaumann said:
Panajev2001a said:
I don't ignore them either, but I still debate them if I am not convinced... how else am I going to learn ;) ?

Of course, but dismissing things as "irrelevant" is not conducive to a learning process.

Oh, is that the case? :oops: IIRC (I won't bother to look it up, sorry), a few pages back aaronspink basically wrote that all of Sony's PR is full of hype and irrelevant to a serious discussion. 65nm-45nm is talk of the future and fair game for speculation, but he extended it to 90nm, brought Intel in, and then wrote off the 90nm facts as 'Sony hype', which is a very stereotypical and illogical jump IMO. Hope he learns something, as I do when using Google :p
 
I do believe the question concerned process technology, and not the implementation of it. Until Aaron jumped in, namely:

aaronspink said:
Sony has a habit of embellishing their technology. Let's state a positive: I am positive that no one will beat Intel to a process node.

Which is unambiguously false. Concerning "embellishing," this comment is something I'd expect from PC-Engine or Deadmeat. Chipworks cleared this up well when they stated:

Chipworks, commenting on CMOS4, said:
The combination of this dielectric process and the smallest transistor seen so far by Chipworks makes this one of the most advanced processes in volume production today.
[Die photos: Sony CXD9797GB (via Chipworks) vs. Intel "Prescott", both at 90nm]

And then, concerning the TTM (time-to-market) aspect, Sony had a 90nm-fabricated product on sale almost 6 months before Intel came to market in February of 2004. I'm still waiting for Aaron, whom Dave thinks we should all listen to and never question, to tell me what 90nm part from Intel I could have bought in the fall of 2003. Anytime you're ready, just post away.

Concerning implementation, which IS irrelevant to the base technology discussion, I think Faf answered this aspect quite well. The Emotion Engine is a static design; the subsequent work on it was to maximize yields and profitability, not to maximize performance.
 
Fodder said:
I'm astounded that people are still missing his point. Sony may have gotten their 90nm part out first, but it may well have also been a far easier part to develop and manufacture.
People got his point; that's not the problem. What people didn't understand is the relevance of that point with regard to the subject, the 90nm processes from Sony and Intel.
From an architectural point of view, the EE+GS and the P4 Prescott are already in apples-vs.-oranges land; pointing out another difference between the chips is not likely to change the discussion much at this point.
 
Intel certainly thinks it was first (not that that is any real evidence, of course :)
"Intel Pentium 4 processor on 90nm technology has the distinction of being the world’s first high-volume processor on the new technology"
http://www.intel.com/technology/itj/2004/volume08issue01/foreword.htm

So it looks like Sony claims they were first and Intel claims they were first. Bit of a stalemate then, so let's have some fun...

I conclude that both Sony's and Intel's chip engineers suck, whereas many others are good.

Why?

I see no pretty chip art from either Sony or Intel, miserable gits :)

http://www.chipworks.com/gallery/gallery_home.asp
 