The Game Technology discussion thread *Read first post before posting*

Sorry to derail the thread, guys. Let's hope joker will be willing to give us a few nuggets of info regarding the 360 and this pseudo triple buffering.
 
Tech is fact, not opinion. You can prefer one design over another, but there are only cold hard facts when it comes down to it. Joker wasn't giving his opinion based on experience; he was stating facts based on it.
I think it's ok to discuss. From my pov, nobody said that Joker is dishonest or anything; he is one of the best things to happen to this forum, and his freedom to speak is more than welcome.
Joker has been working for a while on the PS3; he may have missed something in regard to triple buffering, but that doesn't invalidate his points. Double buffering + v-sync may be good enough in most cases; in fact, double buffering without v-sync is enough in most cases. And basically few people care, even among devs, so they don't even know whether it's plainly supported.

Point is that the link brought up by Alstrong states that MS allows for something like this:
Native 360 games have access to a kind of weird not-really-triple-buffering mode that some games use, but which is not available through the XNA Framework.
It doesn't sound like true triple buffering. It sounds pretty much like an API issue. I'd like to know what Joker or the other devs have to say about that. Not to mention that this implementation may get the job done properly (whether devs use it or not, and why, is another issue).

Other than that, I still get what somehow bothers you: whether the API allows triple buffering or not is a software thing, yet some members here are clearly trying to make this a hardware issue for some reason... It's clear that it's not, as nobody can raise a single piece of real evidence about why it could not be implemented (if it isn't). There is nothing magic about it; if it's not part of the API, then MS would have to be asked for it. As they state in various presentations, the API is evolving, etc., so with good arguments we could expect them to consider it.
 
Last edited by a moderator:
Maybe I am taking it the wrong way, who knows. This is just too often the case with joker because of his preference for 360 development. It's not like he's been returning to the thread to defend his position.

Last I remember he was applying for a job as a PS3 programmer, so he can hardly be all gaga about having to write for the X360.

Additionally, you ignore the "fact" that other 360 developers, coding on the XNA platform and posting on the XNA forums, seem to be saying something contrary to what Joker initially stated.

Is XNA the same dev toolset that the pro shops use to develop for the X360?

If you prefer the 'facts': they show a tendency for the PS3 to run triple buffering without problems, with the same level of tearing as a 360 double-buffered game, so the 'facts' seem to indicate triple buffering is more suitable on the PS3.

English ain't my native language, so I'm not sure exactly what the point of those sentences is.

But my reading of them is that they did triple buffering to get on par with the X360, which did double buffering and v-sync.
So basically, with the PS3 running just double buffering and v-sync, the same game had more screen tearing than the X360 version, i.e. they had to go triple to get it to acceptable levels.

Or am I way off the mark now?
 
Tbh, when I found the xna thread, it was just interesting to discover that there was an alternative to triple buffering that the 360 devs have access to. Triple buffering is triple buffering. Whoopedy do. Devs make decisions. What people prefer is not supposed to be the nature of the thread though.

I'd just like to know what this pseudo triple buffer is. :)
I have an idea, but I'm not sure it makes sense... maybe it's based on me misunderstanding something.
Anyway I'll try to explain what I'm thinking about.

I'm speaking of a double-buffered game with v-sync trying to run at 30 fps. Basically, a frame is sent to the RAMDAC every 33 ms. Let's say the display refresh rate is 60 Hz, so every frame is displayed twice.
If I understand properly, if a frame is not completed within its 33 ms, you have to drop it; basically, the work done is lost. And if you render faster than that, you lose precious cycles waiting.
I imagine it could be possible, instead of losing the work done, to let the render run for an extra 16.7 ms. The last frame would then be displayed three times, which is better than four times, which is what happens when the frame that wasn't completed in time is dropped.
It still has some shortcomings compared to triple buffering: say the frame took 5 ms extra to complete, you're losing 11 ms of rendering time. With triple buffering, the frame would have been stored and the GPU would have started work on the next frame 11 ms earlier, making it more likely to finish that frame within the next 33 ms.
Using the technique I (tried to) describe, a few stressed frames may end up perceived as dropped frames. I say perceived because a true dropped frame should consist of a frame displayed four times in succession, but the missed "half frames" will add up and may be noticed by the end user.
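As a sanity check on the timing argument above, here is a toy model (not any actual console API; the millisecond figures are just the 60 Hz arithmetic, and cascading misses are ignored) counting how many refresh intervals the previous frame stays on screen when the next frame runs 5 ms over its 33 ms budget:

```python
REFRESH_MS = 1000 / 60        # one 60 Hz refresh, ~16.7 ms
TARGET_MS = 2 * REFRESH_MS    # 30 fps budget: a frame normally lasts 2 refreshes

def refreshes_shown(render_ms, grace_ms=0.0):
    """How many refresh intervals the *previous* frame stays on screen
    while the next one renders.  grace_ms=0 models plain double
    buffering + vsync (miss the ~33 ms slot -> previous frame shown
    4 times); grace_ms=REFRESH_MS models the scheme above, where the
    renderer gets one extra refresh (previous frame shown 3 times).
    Per-frame only; knock-on effects on later frames are ignored."""
    if render_ms <= TARGET_MS:
        return 2              # on time: normal 30 fps cadence
    if render_ms <= TARGET_MS + grace_ms:
        return 3              # late, but rescued by the grace refresh
    return 4                  # missed even the grace: a full drop

# A frame that runs 5 ms over the ~33 ms budget:
print(refreshes_shown(38.0))              # 4 with plain double buffering
print(refreshes_shown(38.0, REFRESH_MS))  # 3 with the extended scheme
```

So the "extra refresh" idea turns a full dropped frame (a 4x repeat) into a 3x repeat, at the cost of the 11 ms of GPU time that triple buffering would have recovered.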
 
But my reading of them is that they did triple buffering to get on par with the X360, which did double buffering and v-sync.
So basically, with the PS3 running just double buffering and v-sync, the same game had more screen tearing than the X360 version, i.e. they had to go triple to get it to acceptable levels.

Or am I way off the mark now?
Basically there are no torn frames in either case; it's more about how many frames are dropped. To my understanding, standard double buffering + v-sync is actually more prone to dropping frames than a renderer using triple buffering.
 
English ain't my native language, so I'm not sure exactly what the point of those sentences is.

But my reading of them is that they did triple buffering to get on par with the X360, which did double buffering and v-sync.
So basically, with the PS3 running just double buffering and v-sync, the same game had more screen tearing than the X360 version, i.e. they had to go triple to get it to acceptable levels.

Or am I way off the mark now?

I don't want to persist with this discussion, but again it isn't exactly true: there are games on the 360 that have the same amount of tearing but without triple buffering. Its absence doesn't depend only on a "better" double-buffer choice. Mazinger has just said why, and there is a link. ;)
 
I'd just like to know what this pseudo triple buffer is. :)


There has been a lot of discussion in the comments of the differences between the page flipping method we are discussing in this article and implementations of a render ahead queue. In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default).

The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).

In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed but to drop them if they are not (if they get too old). This requires a little more intelligent management of already rendered frames and goes a bit beyond the scope of this article.

Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject and under certain circumstances this setup will perform the same as triple buffering as we have described it (but definitely not when framerate is higher than refresh rate).

Both techniques allow the graphics card to continue doing work while waiting for a vertical refresh when one frame is already completed. When using double buffering (and no render queue), while vertical sync is enabled, after one frame is completed nothing else can be rendered out which can cause stalling and degrade actual performance.

When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate if it requires a few old frames to be kept around. This can keep instantaneous framerate from dipping in some cases, but will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multiGPU systems to work efficiently.

So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering," as that should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We do realize that this can cause confusion, and we very much hope that this article and discussion help to alleviate this problem.


http://www.anandtech.com/video/showdoc.aspx?i=3591&p=4


This explains it all, especially in joker's case ;)
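The quoted distinction can be sketched in a few lines. This is a hypothetical simulation, not any real DirectX or 360 API: a FIFO flip queue (render ahead) always scans out the oldest pending frame, while page-flip triple buffering scans out the newest and drops the rest:

```python
def display_sequence(frames_ready_per_vblank, mode, queue_len=3):
    """Frame id scanned out at each vblank (vsync on).

    frames_ready_per_vblank: per refresh interval, the ids of frames
    the renderer finished during that interval (hypothetical numbers).
    'render_ahead'  -> FIFO flip queue: the oldest pending frame is
                       shown; once the queue is full, extra frames are
                       simply discarded (a stand-in for the renderer
                       stalling on a full queue).
    'triple_buffer' -> page flipping: the newest completed frame is
                       shown, and older undisplayed frames are dropped.
    """
    queue, shown, last = [], [], None
    for ready in frames_ready_per_vblank:
        if mode == 'render_ahead':
            for f in ready:
                if len(queue) < queue_len:
                    queue.append(f)
            if queue:
                last = queue.pop(0)      # oldest frame wins
        else:
            if ready:
                last = ready[-1]         # newest frame wins
        shown.append(last)
    return shown

# Renderer running at twice the refresh rate: two frames finish per vblank.
ready = [[0, 1], [2, 3], [4, 5], [6, 7]]
print(display_sequence(ready, 'render_ahead'))   # [0, 1, 2, 3]  (stale)
print(display_sequence(ready, 'triple_buffer'))  # [1, 3, 5, 7]  (fresh)
```

When the renderer outruns the refresh rate, the queue shows increasingly stale frames (lag) while page flipping always shows the freshest one; below the refresh rate the two behave the same, which is the point made further down in the thread.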
 
Well it doesn't explain much.
Render ahead and "triple buffering" work the same way when your framerate <= refresh rate, which is almost always the case for console games.

Also, why would MS enforce oldest-first (a queue) as opposed to newest-first page flipping on the 360? I'm assuming queue size is in fact static, as the opposite would be stupid, at least in a console environment.
 
Well it doesn't explain much.
Render ahead and "triple buffering" work the same way when your framerate <= refresh rate, which is almost always the case for console games.

Also, why would MS enforce oldest-first (a queue) as opposed to newest-first page flipping on the 360? I'm assuming queue size is in fact static, as the opposite would be stupid, at least in a console environment.
:???: Why not much? It's simple & clear enough: 'Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).'
Isn't that an explanation? MS prefers smoothness over triple buffering.
 
First of all, I don't see any 360 reference; the quote talks about DirectX only.
Second, if you have a fixed-size queue implementation, providing an option for a "semi-stack" instead should not require rocket science, especially for size 3.

Third, none of those explains why we don't see triple buffering on the 360, because even if the render-ahead option is available, it would still look like triple buffering in most cases (with vsync).
 
betan said:
Render ahead and "triple buffering" work the same way when your framerate <= refreshrate which is almost always the case for console games.
Forget console games - why would anyone want to use anything more than double buffering when framerate >= refresh in the first place?
 
Forget console games - why would anyone want to use anything more than double buffering when framerate >= refresh in the first place?

I think the point of having a queue instead is to allow for unstable framerates that still average less than (or equal to) the refresh rate. Maybe some (possibly non-game) applications need that.
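That intuition can be checked with a rough model (simplified; the frame times are hypothetical numbers in refresh units, and the renderer is assumed to stall while all back buffers are full): a deeper FIFO queue absorbs a single slow frame as long as the average render time still fits in one refresh.

```python
import math

def display_vblanks(render_times, depth=1, refresh=1.0):
    """Vblank index at which each frame reaches the screen, for a FIFO
    queue of `depth` back buffers with vsync on.  render_times are in
    refresh units.  A gap larger than one between consecutive results
    means the previous frame had to be repeated on screen."""
    v, f = [], []                        # display vblank / finish time per frame
    for i, rt in enumerate(render_times):
        start = f[i - 1] if i > 0 else 0.0
        if i >= depth:
            # a back buffer only frees when frame i-depth is scanned out
            start = max(start, v[i - depth] * refresh)
        fin = start + rt
        f.append(fin)
        # shown at the first vblank after completion, in FIFO order
        v.append(max(math.floor(fin / refresh) + 1, v[-1] + 1 if v else 1))
    return v

# Average render time is well under one refresh, but frame 3 spikes:
times = [0.5, 0.5, 0.5, 1.8, 0.5]
print(display_vblanks(times, depth=1))  # [1, 2, 3, 5, 6] -> vblank 4 repeats
print(display_vblanks(times, depth=3))  # [1, 2, 3, 4, 5] -> spike absorbed
```

With one back buffer the spike forces a repeated frame; with three, the frames banked during the fast stretch cover the slow one, at the cost of each displayed frame being older.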
 
First of all, I don't see any 360 reference; the quote talks about DirectX only.
Second, if you have a fixed-size queue implementation, providing an option for a "semi-stack" instead should not require rocket science, especially for size 3.

Third, none of those explains why we don't see triple buffering on the 360, because even if the render-ahead option is available, it would still look like triple buffering in most cases (with vsync).

I don't understand where the difference is between the 360 and its libraries :???:
 
I don't understand where the difference is between the 360 and its libraries :???:
Well, the 360 API is neither DirectX 9, 10, nor 11, nor plain Direct3D, but it's likely that MS didn't take a different route for the 360. Who knows without more insight? In the thread about XNA, the guy hints that devs don't really have access to a real triple-buffering implementation, whereas the technique described in the article qualifies as triple buffering.
On top of that, if you compare their implementation (with an equal number of back buffers) to "standard" triple buffering, it doesn't make much difference when your renderer outputs more or less one frame every 33 ms. Frames get dropped, even if the description we have is unclear. It may explain why some 360 games sometimes drop out of a perceived v-sync mode. MS is not dumb enough, whether on the 360 or the PC, to let back buffers accumulate and input lag grow; if you did nothing and your renderer were slow, you would end up with your RAM full of back buffers and latency reaching whole seconds.
"Standard" triple buffering seems the cleverer approach, but when devs aim at 30 fps the difference is minimal with regard to input lag, as the renderer is unlikely to generate enough frames within the display refresh interval to make any difference (it's different when your renderer outputs many frames per second, more than the actual refresh rate of your display).

Once again, "standard" triple buffering seems the cleverest approach. Why didn't MS implement it? Mystery. Patent issues (part of OpenGL they can't blatantly copy)? No pressure to do so? Who knows. It's certainly not a hint at a hardware problem, as their implementation has the same requirements as "standard" triple buffering; it's just way less elegant.
 
Well, the 360 API is neither DirectX 9, 10, nor 11, nor plain Direct3D, but it's likely that MS didn't take a different route for the 360. Who knows without more insight? In the thread about XNA, the guy hints that devs don't really have access to a real triple-buffering implementation, whereas the technique described in the article qualifies as triple buffering.
On top of that, if you compare their implementation (with an equal number of back buffers) to "standard" triple buffering, it doesn't make much difference when your renderer outputs more or less one frame every 33 ms. Frames get dropped, even if the description we have is unclear. It may explain why some 360 games sometimes drop out of a perceived v-sync mode. MS is not dumb enough, whether on the 360 or the PC, to let back buffers accumulate and input lag grow; if you did nothing and your renderer were slow, you would end up with your RAM full of back buffers and latency reaching whole seconds.
"Standard" triple buffering seems the cleverer approach, but when devs aim at 30 fps the difference is minimal with regard to input lag, as the renderer is unlikely to generate enough frames within the display refresh interval to make any difference (it's different when your renderer outputs many frames per second, more than the actual refresh rate of your display).

Once again, "standard" triple buffering seems the cleverest approach. Why didn't MS implement it? Mystery. Patent issues (part of OpenGL they can't blatantly copy)? No pressure to do so? Who knows. It's certainly not a hint at a hardware problem, as their implementation has the same requirements as "standard" triple buffering; it's just way less elegant.

Nothing is sure :rolleyes:; there are a lot of variables involved when you develop a game, and they change for different reasons. I really doubt a simple patent issue is the reason. I don't understand why it is so absurd to believe the 360 could have more "difficulty" with triple buffering just because "in theory" it doesn't, when the theory exposed doesn't consider other implications. The quote from Mazinger clearly explains that MS prefers smoothness over lag first of all, so maybe they don't give excellent support to triple buffering for that reason. That doesn't mean, however, that triple buffering is impossible. The discussion originated from PS3 "vs" 360 v-sync, so we aren't talking about the impossibility of triple buffering on the 360, but about why it is more present on the PS3. Joker said it's because the tearing would be worse on the PS3, but that isn't exactly so. "Probably" the PS3 has something that makes triple buffering easier to get compared to the 360. That part hasn't been discussed yet, because the conversation is dominated by arguments for why the 360 is more suitable, instead of trying to understand whether there is some reason, in other respects, why the PS3 could have a better chance of getting triple buffering. :???:
 
PlayStation 3 - Physics Effects SDK
At CEDEC 2009, Sony Computer Entertainment exhibited the Physics Effects SDK. The Physics Effects SDK is a physics simulation engine optimized for the Cell Broadband Engine, which is provided as part of the PlayStation 3 SDK. Because the Physics Effects SDK is optimized for PlayStation 3, it can do physics simulations very fast, even though they are considered to impose a very high computing load. This could expand developers' scope to include even more complex physical phenomena and mechanisms in games.
http://www.diginfo.tv/2009/09/11/09-0271-r-en.php
 
What does this offer that other existing physics engines don't? The >1 ms for a small rope seemed worryingly slow to me, but I'm guessing there's a PPE overhead of, say, 1 ms, and subsequent physics iterations add very slightly to that, so large-scale physical interactions are possible.

Still, all they showed was ragdoll physics and basic constraints at work. Nothing physics engines haven't been doing for years. Considering Havok and others are talking procedural behavioural physics, why have Sony gone to this effort?
 