Indeed. eDRAM is not a limit.
Forgive me, but isn't tiling a limit? Sony left it out precisely because of these kinds of problems. eDRAM gives advantages, but it has limits too, of course.
Tiling has its trade-offs, that's for sure. As for Sony's choice to pass on eDRAM, I'm not sure it was a choice to begin with; Nvidia didn't have time to do something as custom as Xenos.
Well, eDRAM's pros aren't always that great, but OK, I know most opinions tend to favor it; personally I think it just depends. RSX aside, the lack of eDRAM doesn't seem accidental, or just down to a lack of time.
I think the pros of eDRAM outweigh the cons.
Joker, your "championing" of the 360 and the tone of your post sound more "fanboy-ish" than many posts I've read on this forum. You're one of the few folks on here qualified to give a valid opinion based on your tangible experience working with both consoles... hence no need to stoop to "their" level with posts like that.
In any case... I think the quote from the capcom dev could possibly have simply been misinterpreted...?
assurdum said: Forgive me, but isn't tiling a limit? Sony left it out precisely because of these kinds of problems. eDRAM gives advantages, but it has limits too, of course.
But there is no need for it on this forum, because the words "can't" and "ps3" are never used in the same sentence except by me it would seem. The 360 fud fest though is off the hook.
In both of the above cases your buffers in main memory are all the same, and there is no pixel hit between the two. The tiling 'limit' comes from the objects that straddle the tile boundaries. In case #2, the first render pass was pixel rows 0 to 511 and the second pass was pixel rows 512 to 719. For objects that are completely in one tile or the other there is no rendering cost. You have to calculate cpu side what tile they belong in and set it, but that's very cheap to do. The problem is with an object that occupies, say, pixel rows 500 to 600: it has to be submitted in both tile passes, so its geometry gets processed twice.
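To make the straddling cost concrete, here's a minimal sketch of that CPU-side classification. This is hypothetical illustration code, not the actual 360 SDK API; the only number taken from the post above is the row-512 boundary between the two tiles.

[code]
// Hypothetical sketch of per-object tile classification for the two-tile
// 720p layout described above (rows 0-511 and rows 512-719).
// Objects entirely inside one tile are drawn in that tile's pass only;
// objects crossing row 512 must be submitted in both passes, which is
// where the extra cost comes from.
struct ScreenRect { int y0, y1; };  // vertical extent of the object's screen-space bounds

enum TileMask { TILE_TOP = 1, TILE_BOTTOM = 2, TILE_BOTH = TILE_TOP | TILE_BOTTOM };

TileMask ClassifyObject(const ScreenRect& r)
{
    const int kSplitRow = 512;                 // tile boundary for this setup
    if (r.y1 < kSplitRow)  return TILE_TOP;    // e.g. rows 100-400: pass 1 only
    if (r.y0 >= kSplitRow) return TILE_BOTTOM; // e.g. rows 600-700: pass 2 only
    return TILE_BOTH;                          // e.g. rows 500-600: processed twice
}
[/code]

The test itself is just a couple of comparisons per object, which is why the CPU-side part is very cheap; the real cost is the duplicated geometry work for anything classified TILE_BOTH.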
The Framework engine and the 360 are an odd pair. Grandmaster has seen tearing at 720p that wasn't present in the same scene at upscaled 1080p.
assurdum said: Forgive me, but isn't tiling a limit?

Only as regards triple, quadruple, quintuple etc. buffering. For other aspects of image rendering, eDRAM offers trade-offs and so some limits, like all solutions (curse our finite hardware!).
It's the other way around: more tearing is noticeable with the upscale.
Also curious is that in my tests I've found instances of tearing at 720p rendering which did not tear in the same sequence running at 1080p.
First of all, thanks a lot for these interesting posts. I have a small question about tiling on the 360. Do tiling engines use a static or dynamic approach? That is, are the tiles (vertical/horizontal split and tile sizes) chosen once and set in stone, or are there cases where the engine would choose the tiles' size and geometry based on scene geometry and complexity?
I'm trying to figure out whether the cost of deciding the optimum tiles pays off as a performance boost versus just choosing a static tile setup.
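Not an answer on static vs. dynamic, but for what it's worth the static split described earlier (rows 0-511 / 512-719) falls out of simple arithmetic. The sketch below assumes a 32-bit colour + 32-bit depth target with 2xMSAA and 10 MB of eDRAM; those formats are my assumption, not something stated in the posts above.

[code]
// Back-of-envelope check (assumed format: 4-byte colour + 4-byte depth per
// sample, 2xMSAA, 10 MB of eDRAM) of the tallest tile that fits on-chip.
#include <cstdio>

int main()
{
    const int kEdramBytes     = 10 * 1024 * 1024; // 10 MB of eDRAM
    const int kWidth          = 1280;
    const int kBytesPerSample = 4 /*colour*/ + 4 /*depth*/;
    const int kSamples        = 2;                // 2xMSAA

    int maxRows = kEdramBytes / (kWidth * kBytesPerSample * kSamples);
    printf("max tile height: %d rows\n", maxRows); // prints 512

    // So a 1280x720 target needs two tiles: rows 0-511 and rows 512-719,
    // matching the static split described earlier in the thread.
    return 0;
}
[/code]

With a fixed output resolution and buffer format the tile sizes are pinned down by that arithmetic, so a static setup costs nothing to "decide"; in principle a dynamic scheme could only win by moving the boundary around to reduce how many objects straddle it.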
If you looked back farther in the FPS analysis thread, you'd find several reports of prolific 1080p tearing compared to little on 720p.
But ok.
Hey Cory, your signature needs to be updated
I think what he was getting at is that, for everyone to consider it a good port, everything would have to be equal, including frame rate, texture quality and so on. The easiest solution would be to lower the resolution of the textures on the 360 and lock both games at 30fps, or lower the resolution of both versions and lock them at whatever framerate you are able to get. The funny thing about doing that is I wonder which version would get blamed for holding the other one back.

Well, they've already reduced the texture resolution by a lot... why not just use lower res buffers?
1) we need 60 fps
2) we need lots of overdraw
3) we need the overdraw bits to maintain their detail
4) we need the overdraw bits to linger on screen a long time with said detail
Given that, how do you guys propose to solve the above scenario in Bayonetta on the PS3, even though the 360 has an order of magnitude more bandwidth to the frame buffer for transparencies compared to the PS3? I normally wouldn't ask, but since it's already been binned as a bad port with such confidence, I presume you guys know of a solution. I'm sure you know that Edge wouldn't help here, and that you can't use low res buffers for transparencies and still maintain detail. So how would you fix this bad port?
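For a sense of why frame-buffer bandwidth dominates that scenario, here's a rough illustration. Every number below is an assumption chosen for illustration, not a figure from the post above: alpha blending reads and writes the destination, so full-resolution overdraw at 60 fps multiplies frame-buffer traffic quickly.

[code]
// Illustrative arithmetic only; the overdraw depth and buffer format are
// assumptions, not figures from the post above.
#include <cstdio>

int main()
{
    const double kPixels     = 1280.0 * 720.0; // full-res transparency target
    const double kBytesPerPx = 4.0;            // 32-bit colour
    const double kBlendRW    = 2.0;            // blending = read + write destination
    const double kOverdraw   = 8.0;            // assumed layers of overdraw
    const double kFps        = 60.0;

    double gbPerSec = kPixels * kBytesPerPx * kBlendRW * kOverdraw * kFps / 1e9;
    printf("blended fill traffic: ~%.1f GB/s\n", gbPerSec); // ~3.5 GB/s at these numbers

    // Traffic scales linearly with overdraw depth, so deep stacks of detailed,
    // long-lived transparencies push this figure up fast -- and it all has to
    // come out of whatever bandwidth the frame buffer actually has.
    return 0;
}
[/code]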