Bohdy said:Ya, for one thing you can't rotate your camera in the original.
TheChefO said:In first person view I think you can.
Bohdy said:Whoops, you are right. Is that what you meant, Shompola?
I've seen the framerate drop before, usually due to too many transparencies, but I still don't recall seeing screen-tearing in the original. Maybe my memory is just faulty.
Shompola said:Metal Arms Glitch In The System multiplatform
I've owned and played all the way through Metal Arms for both Cube and Xbox. I never noticed any frame tearing in the Cube version or even that much slowdown. The Xbox version was a mess, comparatively. There may have been a little in the Cube version, but it wasn't game-breaking the way the Xbox version was.
PeterT said:Not if you use triple buffering, which is what Mintmaster suggested.
...
I have never heard of any console game that uses triple buffering though. In my opinion, it wouldn't make much sense on a fixed platform where you can be absolutely sure that you reach X fps with Y load. (Of course, on last-gen consoles the graphics RAM cost would probably also have been prohibitive)
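For a rough sense of the memory numbers behind that point, here is a back-of-the-envelope sketch in C; the 640x480, 32-bit-per-pixel figures are illustrative, not any particular last-gen console's actual framebuffer format:

/* Back-of-the-envelope framebuffer sizes; figures are illustrative,
   not taken from any specific console's specs. */
#include <stdio.h>

int main(void) {
    /* A typical last-gen render target: 640x480 at 32 bits per pixel. */
    const int w = 640, h = 480, bytes_per_pixel = 4;
    const double buffer_mb = (double)w * h * bytes_per_pixel / (1024.0 * 1024.0);

    printf("One colour buffer: %.2f MB\n", buffer_mb);       /* ~1.17 MB */
    printf("Double buffering:  %.2f MB\n", 2 * buffer_mb);   /* ~2.34 MB */
    printf("Triple buffering:  %.2f MB\n", 3 * buffer_mb);   /* ~3.52 MB */
    return 0;
}

On a console with only a few MB of video RAM, that extra ~1.2 MB for a third buffer would indeed be a hard sell.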
Shompola said:As I have said, in Ninja Gaiden it rarely happens. And sorry, I should have clarified that I meant rotating the camera in first-person mode. But the game definitely has v-sync off, as I have experienced the screen tearing and I am not the only one who has documented this experience. Have you guys experienced the framerate going into single digits?
Metal Arms, I haven't played the GC version. I have played the Xbox version, however, and the screen tearing is severe. It is hard not to notice it.
Shompola said:PAL for Xbox games. I think I ran the games in PAL 60?
hupfinsgack said:Well, you would take a performance hit anyway, I suppose, since triple buffering would tax the system performance as well, wouldn't it?
PeterT said:Well, aside from the amount of memory used, I don't see why it should tax the system more than double buffering. The number of operations should be exactly the same.
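A minimal sketch in C of what that means, with hypothetical stand-in names (framebuffer_t, render_into): the loop body is identical whether two or three buffers are rotated, so triple buffering adds memory, not rendering operations:

#include <stdio.h>

#define NUM_BUFFERS 3          /* set to 2 for double buffering: same loop */

typedef struct { int id; } framebuffer_t;   /* hypothetical placeholder type */

static void render_into(framebuffer_t *fb) {
    /* Identical draw calls regardless of how many buffers exist. */
    printf("drawing frame into buffer %d\n", fb->id);
}

int main(void) {
    framebuffer_t buffers[NUM_BUFFERS];
    for (int i = 0; i < NUM_BUFFERS; i++) buffers[i].id = i;

    for (int frame = 0; frame < 6; frame++) {
        int draw = frame % NUM_BUFFERS;   /* rotate through the buffers */
        render_into(&buffers[draw]);
        /* buffers[draw] is presented at the next vertical refresh; with a
           third buffer the GPU need not stall waiting for that refresh. */
    }
    return 0;
}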
Are there any recommendations for consumers to possibly alleviate the problem?
Chris Satchell: There isn’t anything I would recommend for consumers that can help alleviate the problem. I think when gamers get their hands on this year's 2nd-generation Xbox 360 titles they will be blown away with the visual excellence.
Tap In said:sounds like it is something we will be seeing less of as the system matures.
hupfinsgack said:Well, as far as the X360 is concerned, 720p with 2xAA doesn't fit in the eDRAM, IIRC, so having another 3.7 MB taken would mean more tiling and lower performance, wouldn't it?
Shifty Geezer said:Not at all. Switching a buffer is no effort at all. All you're doing is rendering the same number of frames per second but with an extra screen kept in memory.
If we go with some hypothetical figures, let's say the screen is refreshed once every 15 ms, and your game is producing a new screen on average every 14 ms, but sometimes taking as long as 17 ms.
In double buffering, you're viewing one screen (the front buffer) while the back buffer is being drawn. It's best to refer to them as buffers A and B, as they alternate between being the back buffer and the front buffer.
Now if rendering the next frame into buffer B takes 12 ms and you don't wait for the vertical refresh, the display can switch to B before it has finished drawing A to the screen, which produces tearing. If that frame takes 16 ms to produce, the display begins drawing what's in buffer A again before the new screen in B is ready, and the mid-refresh switch to B also results in tearing. If you enable vertical lock, B won't be shown until A has finished drawing, which means no tearing. But if B takes 12 ms to produce and A takes 15 ms to draw, your graphics engine is sitting idle for 3 ms. And if B takes 16 ms to fill, you have to wait for A to be drawn a whole second time, which means a whole frame of delay and a frame dropped later.
With triple buffering we add a buffer C. That way we draw into buffer B while A is being output to screen, then switch to drawing in C while B is being displayed, then switch to rendering to A while C is being output. This means no wasted time waiting for the refresh. Every screen is finished as fast as it can be, while the next screen to be output can wait for the vertical sync, meaning no tearing and, if managed properly, a rock-steady framerate. But there will be a frame's lag on the graphics, which might impact the experience especially on fast games, and you will need an extra screen's buffer worth of memory, which at 720p with 32 bits per pixel would be about 3.5 MB. Other than that there's very little overhead. The rendering engine does exactly the same drawing as with double buffering.
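To make those numbers concrete, here is a tiny simulation in C of the hypothetical figures above (a 15 ms refresh and frames alternating between 12 ms and 16 ms); it's a simplified sketch that assumes a double-buffered swap can only happen on a refresh boundary, while a triple-buffered renderer always has a free buffer:

#include <stdio.h>

#define REFRESH 15.0   /* ms per vertical refresh, per the example above */

int main(void) {
    /* Alternate 12 ms and 16 ms render times, as in the example above. */
    double cost[8] = {12, 16, 12, 16, 12, 16, 12, 16};
    double t_double = 0, t_triple = 0;

    printf("frame  double-buffered presented  triple-buffered done\n");
    for (int i = 0; i < 8; i++) {
        /* Double buffering + vsync: after finishing a frame, the renderer
           must wait for the next refresh before the buffer swap. */
        double finish_d = t_double + cost[i];
        double swap = REFRESH * ((int)(finish_d / REFRESH) + 1);
        t_double = swap;   /* rendering of the next frame starts at the swap */

        /* Triple buffering: a free buffer is (almost) always available,
           so the next frame starts the moment this one finishes. */
        t_triple += cost[i];

        printf("%5d  %8.0f ms (idle %2.0f ms)  %8.0f ms\n",
               i, swap, swap - finish_d, t_triple);
    }
    return 0;
}

With these numbers the double-buffered frames come out 15 ms and 30 ms apart in alternation (the 16 ms frames miss a refresh and drop a frame), while the triple-buffered renderer finishes a frame every 12 to 16 ms and has a fresh one ready for nearly every refresh.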
TheChefO said:Could be why we're not having the same vsync problems. I'm NTSC, and while I won't say there aren't vsync problems with NG on NTSC Xbox, I will say that if they were there, I never noticed them. Or perhaps they were just more severe on PAL due to refresh differences.
chroniceyestrain said:I'm NTSC as well and there is definitely tearing in the first NG.
hupfinsgack said:I get how triple buffering works after reading PeterT's post. However, I don't get where the memory is allocated:
So let's say buffers A, B, and C each take 3.5 MB.
If A is the front buffer, does that mean it resides in the eDRAM while B and C are both in system RAM, or are all three of them located in the eDRAM?
pipo said:The box takes complete frames from system RAM. There doesn't have to be a complete frame in the 10MB at any time AFAIK...
I'd like to look at the eDRAM as a very fast scratchpad for what that's worth.
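A rough sketch in C of the flow pipo describes; every name here is a hypothetical stand-in (this is not the real Xbox 360 API), and the tile count is illustrative. The point is only that the eDRAM holds the tile currently being rendered, while the complete frames the video output scans out live in system RAM:

#include <stdio.h>

/* Hypothetical placeholder type: a complete 720p, 32-bit frame (~3.5 MB). */
typedef struct { unsigned char pixels[1280 * 720 * 4]; } framebuffer_t;

/* The A/B/C buffers of the triple-buffered chain, all in system RAM. */
static framebuffer_t frames_in_system_ram[3];

static void render_tile_in_edram(int tile) {
    /* Hypothetical stand-in: working colour/Z for this tile only. */
    printf("  rendering tile %d in eDRAM\n", tile);
}

static void resolve_tile_to(framebuffer_t *dest, int tile) {
    /* Hypothetical stand-in: copy the finished pixels out to system RAM. */
    (void)dest;
    printf("  resolving tile %d out to system RAM\n", tile);
}

int main(void) {
    const int num_tiles = 3;   /* illustrative; depends on resolution and AA */
    for (int frame = 0; frame < 3; frame++) {
        printf("frame %d -> buffer %c\n", frame, 'A' + frame);
        for (int t = 0; t < num_tiles; t++) {
            render_tile_in_edram(t);
            resolve_tile_to(&frames_in_system_ram[frame], t);
        }
        /* Scanout reads the finished frame from system RAM, so no complete
           frame ever needs to sit in the 10 MB of eDRAM at once. */
    }
    return 0;
}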