One dev's view: X2 a disappointment

Thanks for the heads up...

Regarding that post by the dev:

- IMO the shared L2 cache is not a bad idea.. if it's anything like POWER5's shared L2 cache ( http://www.aceshardware.com/read.jsp?id=65000325 ), it should be OK. The only thing is that I think it's too small: 1 MB of L2 shared not only between a tri-core CPU but the GPU as well is too small IMO.



- the comments that creating cross-platform games "will likely be damn-near impossible" for the next gen are interesting... IGN made similar comments:
...the platform requires third party exclusives. There's an upside or a downside here, depending upon how you look at it. If Revolution is as different as Nintendo says, it may prove incredibly difficult for developers to port over their multi-console games, which could be disastrous for the machine. On the other hand, if Nintendo actively seeks exclusives designed from the ground up for Revolution, even offering to fund some of the projects, the strategy could pay off in full, separating the console from the others and simultaneously giving players more reasons to pick it up.
[source: http://cube.ign.com/articles/587/587028p2.html ]



- the comments that PS3 will be the most powerful of the 3 consoles fit with what EA has said:
"We know maybe what the PS3 will do, but we can only guess," said Rory Armes, studio general manager for video game giant Electronic Arts in Europe.

"It's a horrendous effort in the first year," he admitted.

Microsoft had delivered development kits to EA, said Mr Armes, but he said the company was still waiting on Sony and Nintendo to send kits.

Although the details may not be nailed down, Mr Armes said EA was beginning to get a sense of the capabilities of the new machines.

"The rumours are that PlayStation 3 will have a little more under the hood [than Xbox 2]," he said....

..."Microsoft is obviously a software company first and foremost, while Sony has more experience in hardware. I think Sony will be able to push more into a box at cost."
[source: http://news.bbc.co.uk/2/hi/technology/4201391.stm ]

(I first heard about that BBC article from this thread: http://www.beyond3d.com/forum/viewtopic.php?t=19962 )
 
Almost no one replies, but over 300 views.

Does anyone agree with the following statement:

"The fact that they choose to centralize their FSB or share a single L2 cache among 3 processors shows some real lack of insight. The biggest flub would have to be that 10 MB eDRAM on the GPU -- which I'm told is really MS's idea (both MS and ATI told me that much) -- that just says they didn't even think about resolution."
 
um... I thought 10 MB was going to be enough for 1280x720, and that the video hardware rendered into tiles in some fashion (not deferred)?

Perhaps he wanted more cache for all the CPUs.
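
Quick back-of-the-envelope check on that 10 MB question (just my own sketch; the buffer formats are assumptions, not confirmed Xenos specs):

Code:
# Rough 720p framebuffer size, assuming plain RGBA8 colour + 24/8 Z/stencil
pixels = 1280 * 720                 # 921,600 pixels

color_rgba8 = pixels * 4            # 4 bytes/pixel colour
depth_z24s8 = pixels * 4            # 24-bit Z + 8-bit stencil = 4 bytes/pixel

total = color_rgba8 + depth_z24s8   # 7,372,800 bytes
print(total / 10**6)                # ~7.4 MB (decimal)
print(total / 2**20)                # ~7.0 MiB -- fits in 10 MB with room to spare

So a plain, non-AA, non-HDR 720p frame fits easily; the question is what happens once you add FSAA or HDR.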
 
bbot said:
Almost no one replies, but over 300 views.

Does anyone agree with the following statement:

"The fact that they choose to centralize their FSB or share a single L2 cache among 3 processors shows some real lack of insight. The biggest flub would have to be that 10 MB eDRAM on the GPU -- which I'm told is really MS's idea (both MS and ATI told me that much) -- that just says they didn't even think about resolution."

Everything Deano has hinted about the Xbox2 CPU is that it's nice and fast, so I'm not entirely worried that sharing a single L2 cache among more than one processor is a bad move. As for the embedded DRAM, I was under the impression that a single 720p frame could fit in about 10.5 MB, so I'm not entirely worried about the 10 MB either, provided the GPU also supports UMA.

Tommy McClain
 
bbot said:
Almost no one replies, but over 300 views.

Does anyone agree with the following statement:

"The fact that they choose to centralize their FSB or share a single L2 cache among 3 processors shows some real lack of insight. The biggest flub would have to be that 10 MB eDRAM on the GPU -- which I'm told is really MS's idea (both MS and ATI told me that much) -- that just says they didn't even think about resolution."

fake
 
IMO there's a lot of shit in that post. I can see MS driving more on the CPU side, but the graphics is primarily the design that ATI had in place. There is no way that MS would pick an arbitrary quantity of eDRAM, and even less chance of ATI just implementing it because MS told them to.
 
Wait, this cpias<something> guy is the same fellow who claimed SCEA let him see their prototype PS3 a few months back (posted here), right? A regular poster at PSINext, the former PS3Insider.

The last time we discussed his posts we were divided over his authenticity...
 
Jaws said:
And FWIW, I've exchanged posts with him not only on psinext.com but also on flipcode.com, and he seems a credible and sincere guy.
But isn't Cpiasminc the guy who witnessed a Cell prototype machine at 1 GHz in mid-2004? :LOL: He seems very knowledgeable on technical matters but sometimes goes astray in a daydream...

Cpiasminc said:
BTW, I should also note that based on what I'm hearing from every studio I've been to, I'd have to say that, at least for the first generation of games on next-gen consoles, you will not see anything running at 60 fps. There is not one studio I've talked to who hasn't said they're shooting only for 30 fps. Some have even said that for next-gen, they won't shoot higher than 30 fps ever again.
 
one said:
Jaws said:
And FWIW, I've exchanged posts with him not only on psinext.com but also on flipcode.com, and he seems a credible and sincere guy.
But isn't Cpiasminc the guy who witnessed a Cell prototype machine at 1 GHz in mid-2004? :LOL: He seems very knowledgeable on technical matters but sometimes goes astray in a daydream...

Cpiasminc said:
BTW, I should also note that based on what I'm hearing from every studio I've been to, I'd have to say that, at least for the first generation of games on next-gen consoles, you will not see anything running at 60 fps. There is not one studio I've talked to who hasn't said they're shooting only for 30 fps. Some have even said that for next-gen, they won't shoot higher than 30 fps ever again.


You are welcome to believe what you want, but you are not privy to all the relevant information. And another FWIW, aspects of CELL that went against certain beliefs of this forum transpired to be true well before any of the ISSCC revelations.

You can take my word for it or not... but I'm strongly in favour of MfA's post above wrt this thread.
 
DemoCoder said:
10 MB of eDRAM is not enough for 720p double-buffered or FSAA'd, and if using HDR, forget it.

It doesn't need to be enough for double buffering. Only the back buffer needs to be in the cache, and copying the frame buffer out once per frame is easy to do. There is no reason the front buffer needs to be in the GPU's cache, since it shouldn't be touching it.

And I'm not sure what's wrong with it for HDR or FSAA? You can fit either a 720p HDR buffer (16-bit floating point) or a 2x 720p buffer (8-bit integer components) fine. How many bits your Z-buffer uses determines whether you can keep it entirely in memory at all times, but with enough cache to almost allow for it there should be only a minor performance hit from using HDR, and slightly more of a hit due to transfers for FSAA.

A 16-bit floating point framebuffer with a 24-bit Z-buffer and an 8-bit stencil buffer comes in at just over 10.5 MB for 720p.
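
A sketch of that arithmetic, assuming an FP16 RGBA colour buffer plus a packed 24/8 Z/stencil buffer (whether it reads as ~10.5 or ~11 depends on MB vs MiB):

Code:
# 720p with an FP16 (HDR) colour buffer -- formats assumed, not confirmed
pixels = 1280 * 720

color_fp16  = pixels * 8            # 4 channels x 16-bit float = 8 bytes/pixel
depth_z24s8 = pixels * 4            # 24-bit Z + 8-bit stencil  = 4 bytes/pixel

total = color_fp16 + depth_z24s8    # 11,059,200 bytes
print(total / 2**20)                # ~10.55 MiB ("just over 10.5")
print(total / 10**6)                # ~11.06 MB decimal -- either way, just over the 10 MB of eDRAM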
 
Jaws said:
And another FWIW, aspects of CELL that went against certain beliefs of this forum transpired to be true well before any of the ISSCC revelations.

Really? Can you be a bit more specific on those aspects?

EDIT: BTW, this old thread in the Beyond3D forum is about what the guy called cpiasminc said before about the sample Cell. Sometimes his statements seem right, but only as good extrapolation from already-available info. If you know something that confirms he brought something real before ISSCC, I'd like to know it more than anything else Cell-related. Only if it's not under NDA as cpiasminc claims, that is ;)
 
one said:
Jaws said:
And another FWIW, aspects of CELL that went against certain beliefs of this forum transpired to be true well before any of the ISSCC revelations.

Really? Can you be a bit more specific on those aspects?

EDIT: BTW, this old thread in the Beyond3D forum is about what the guy called cpiasminc said before about the sample Cell. Sometimes his statements seem right, but only as good extrapolation from already-available info. If you know something that confirms he brought something real before ISSCC, I'd like to know it more than anything else Cell-related. Only if it's not under NDA as cpiasminc claims, that is ;)

Why? What difference would that make? I've stated my view in the original thread above and in this thread too. But I'm kinda cheesed off at all these knee-jerk BS remarks, so I'll elaborate...

1. No multiple PE IC's. Just single CELL IC's in 1, 2 and 4-way configs.

2. No eDRAM, just XDR.

3. No custom Sony or Toshiba GPU, but an off-the-shelf GPU (now assumed to be nVidia), before the nVidia press releases and the ISSCC previews last year.

Now who's to say I'm not typing BS above? Or wasn't my original word good enough? <Shrugs>
 
Cryect said:
And I'm not sure what's wrong with it for HDR or FSAA? You can fit either a 720p HDR buffer (16-bit floating point) or a 2x 720p buffer (8-bit integer components) fine.

You are grasping at straws. 1280x720 RGBA + 24/8 Z/stencil eats up 7.4 MB alone. Come on, 2x FSAA? First of all, it won't fit in 10 MB, even if you drop the alpha channel and stencil. Secondly, most people would assume a next-gen system should be able to handle 4xRGMS minimum, and now you're talking almost 30 MB in the worst case for compression. An HDR framebuffer would kick that up to 45 MB worst case. If the X-Box2 can't do what an X700 or GF6600 can do FSAA-wise, or even an X-Box 1, then it sucks, plain and simple.



A 16-bit floating point framebuffer with a 24-bit Z-buffer and an 8-bit stencil buffer comes in at just over 10.5 MB for 720p.

10.5 > 10.0. No space left for texture cache, vertex buffers, "unified shaders"; render-to-texture ops must be copied to main memory. In fact, pretty much everything must go to main memory except blending ops. Whoop de do.

I expected more from a next-gen console. Frankly, the Xbox2 is looking more and more disappointing as time goes on.
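
Those worst-case figures check out, assuming 4 samples per pixel and zero benefit from colour/Z compression (which is what "worst case" means here):

Code:
# Uncompressed 720p 4x multisample buffers -- my own sketch of the worst case
pixels  = 1280 * 720
samples = 4                                   # 4x RGMS/MSAA

rgba8_4x = pixels * samples * (4 + 4)         # RGBA8 colour + 24/8 Z/stencil per sample
fp16_4x  = pixels * samples * (8 + 4)         # FP16 HDR colour + 24/8 Z/stencil per sample

print(rgba8_4x / 10**6)                       # ~29.5 MB -> "almost 30 MB worst case"
print(fp16_4x / 10**6)                        # ~44.2 MB -> roughly the 45 MB HDR figure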
 
DemoCoder said:
10.5 > 10.0. No space left for texture cache, vertex buffers, "unified shaders"; render-to-texture ops must be copied to main memory. In fact, pretty much everything must go to main memory except blending ops. Whoop de do.
eDram = transient render buffers.
Main memory and L2 cache = texture & vertex storage.

Texture & vertex caches are obviously still there, just like in any other ATI GPU, as actual caches. And all render-to-texture ops go to main memory on Xbox and GC as well, so that's nothing new (and if the memory design allows stall-free copying out of eDram, it will be just fine).
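
And if the tiled-rendering idea mentioned earlier in the thread is real (still an assumption at this point), the split into eDRAM-sized chunks would look roughly like this:

Code:
import math

EDRAM = 10 * 2**20                   # 10 MiB of eDRAM, assumed fully usable for render buffers

def tiles_needed(width, height, bytes_per_pixel):
    """How many screen tiles are needed so each tile's transient buffers fit in eDRAM."""
    frame_bytes = width * height * bytes_per_pixel
    return math.ceil(frame_bytes / EDRAM)

print(tiles_needed(1280, 720, 4 + 4))        # no AA, RGBA8 + Z24S8 -> 1 tile
print(tiles_needed(1280, 720, 4 * (4 + 4)))  # 4x AA, RGBA8         -> 3 tiles
print(tiles_needed(1280, 720, 4 * (8 + 4)))  # 4x AA, FP16 HDR      -> 5 tiles

Each tile would be resolved out to main memory before the next one is rendered, which is the "copying out of eDram" step above.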
 
Jaws said:
Why? What difference would that make? I've stated my view in the original thread above and in this thread too. But I'm kinda cheesed off at all these knee-jerk BS remarks, so I'll elaborate...
Well, I should have been more specific too: are those points based only on his own educated speculation, or on info the guy was given by SCEA or others involved? If the guy said "the GPU is not in-house, because that's a fact", then OK, but if he said "the GPU is not in-house, because it's more cost-effective or powerful", then it's just speculation, no matter whether it turns out to be true eventually.

AFAICS, the "Xbox2 not very good" thing came from what he could gather from developers he met, except for his subjective view on "Xbox2 is PC-like". If it's only by his speculation, no one had cared about like that on other boards. Likewise, can you say those points about the Cell processor and GPU details the guy could predict didn't come from his good guess but from some internal info? It's a very important distinction, IMHO, as what cpiasminc wrote so far is actually very interesting. :)
 