I think it's ok to discuss. Nobody, from my pov, said that Joker is dishonest or anything; he is one of the best things that has happened to this forum and his freedom to speak is more than welcome.

Tech is fact, not opinion. You can prefer one design over another, but there are only cold hard facts when it comes down to it. Joker wasn't giving his opinion based on experience, he was stating facts based on it.
It doesn't sound like true triple buffering; it sounds pretty much like an API issue. I'd like to know what Joker or the other devs have to say about that. Not to mention that this implementation may get the job done properly (whether devs use it or not, and why, is another issue).

Native 360 games have access to a kind of weird not-really-triple-buffering mode that some games use, but which is not available through the XNA Framework.
Maybe I am taking it the wrong way, who knows. This is just too often the case with joker because of his preference for 360 development. It's not like he's been returning to the thread to defend his position.
Additionally, you ignore the "fact" that other 360 developers coding on the XNA platform and posting on the XNA forums seem to be saying something contrary to what Joker initially stated.
If you prefer the 'facts': they show a tendency for the PS3 to run triple buffering without problems while having the same level of tearing as a 360 double-buffered game, so the 'facts' seem to indicate triple buffering is more suitable on the PS3.
I have an idea but I'm not sure it makes sense... maybe it's based on me misunderstanding something.

Tbh, when I found the XNA thread, it was just interesting to discover that there was an alternative to triple buffering that the 360 devs have access to. Triple buffering is triple buffering. Whoopedy do. Devs make decisions. What people prefer is not supposed to be the nature of the thread, though.
I'd just like to know what this pseudo triple buffer is.
Basically no frames are torn in either case; it's more about how many frames are dropped. Actually, standard double buffering + v-sync, to my understanding, is more prone to dropping frames than a renderer using triple buffering.

But my pov on them is that they did triple buffering to get on par with the X360, which did double buffering and v-sync.
So basically, with the PS3 running just double buffering and v-sync, the same game had more screen tearing than the X360 version, i.e. they had to go triple to get it to acceptable levels.
Or am I way off the mark now?
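To illustrate the point about double buffering + v-sync dropping frames, here is a toy timing model (assumed numbers and names, not any console's actual API or SDK): with plain double buffering the GPU has nowhere to render once its single back buffer is waiting for the flip, so a frame that misses a vblank costs a whole refresh, while a third buffer lets rendering continue.

```cpp
// Toy model with assumed numbers (not any real console API): count how many
// frames finish in one second at 60 Hz when each frame takes 20 ms to render.
// extra_back = 0 -> plain double buffering + v-sync (GPU stalls until the flip)
// extra_back = 1 -> triple buffering (GPU renders on into the spare buffer)
#include <cstdio>

int framesPerSecond(double render_ms, int extra_back) {
    const double refresh_ms = 1000.0 / 60.0;
    double start = 0.0;                       // when the current frame begins
    int frames = 0;
    while (start + render_ms <= 1000.0) {
        double done = start + render_ms;
        ++frames;
        if (extra_back > 0) {
            start = done;                     // spare back buffer: no stall
        } else {
            // the only back buffer stays full until the next vblank flips it
            start = (static_cast<int>(done / refresh_ms) + 1) * refresh_ms;
        }
    }
    return frames;
}

int main() {
    std::printf("double buffering + v-sync: %d fps\n", framesPerSecond(20.0, 0));
    std::printf("triple buffering:          %d fps\n", framesPerSecond(20.0, 1));
}
```

With a 20 ms frame on a 60 Hz display this toy model prints roughly 30 fps versus 50 fps, which matches the usual complaint that double buffering + v-sync locks you to divisors of the refresh rate.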
Last I remember he was applying for a job as a PS3 programmer, so he can hardly be all gaga about having to write for the X360.
XNA is the same dev tools that the pro shops use to develop for the X360?
English ain't my native language, so I'm not sure what exactly the point of those sentences is.
But my pov on them is that they did triple buffering to get on par with the X360, which did double buffering and v-sync.
So basically, with the PS3 running just double buffering and v-sync, the same game had more screen tearing than the X360 version, i.e. they had to go triple to get it to acceptable levels.
Or am I way off the mark now?
I'd just like to know what this pseudo triple buffer is.
There has been a lot of discussion in the comments of the differences between the page flipping method we are discussing in this article and implementations of a render ahead queue. In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default).
The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).
In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed but to drop them if they are not (if they get too old). This requires a little more intelligent management of already rendered frames and goes a bit beyond the scope of this article.
Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject and under certain circumstances this setup will perform the same as triple buffering as we have described it (but definitely not when framerate is higher than refresh rate).
Both techniques allow the graphics card to continue doing work while waiting for a vertical refresh when one frame is already completed. When using double buffering (and no render queue), while vertical sync is enabled, after one frame is completed nothing else can be rendered out which can cause stalling and degrade actual performance.
When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate if it requires a few old frames to be kept around. This can keep instantaneous framerate from dipping in some cases, but will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multiGPU systems to work efficiently.
So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering," as that should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We do realize that this can cause confusion, and we very much hope that this article and discussion help to alleviate this problem.
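For what it's worth, the difference the article describes boils down to the policy used at the vblank. Here is a minimal sketch (hypothetical names, not D3D or the 360 SDK):

```cpp
// Minimal sketch of the two present policies (hypothetical, not a real API).
// Both assume at least one frame has completed; otherwise the previous image
// simply stays on screen for another refresh.
#include <deque>

struct Frame { int id; };                 // stands in for a rendered back buffer

std::deque<Frame> completed;              // frames finished since the last flip

// Render ahead (what DirectX exposes): show the OLDEST frame and never discard
// any, so every entry sitting in a full queue adds another refresh of lag.
Frame presentRenderAhead() {
    Frame f = completed.front();
    completed.pop_front();
    return f;
}

// Triple buffering as described in the article: show the NEWEST frame and drop
// anything older, so what is displayed is never more than one frame behind.
Frame presentTripleBuffering() {
    Frame f = completed.back();
    completed.clear();                    // stale frames are simply dropped
    return f;
}
```

When the renderer finishes at most one frame per refresh, the queue never holds more than one entry and the two functions return the same thing, which is where the argument below about console frame rates comes in.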
Well it doesn't explain much.

Why not much? It's simple & clear enough: 'Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).'
Render ahead and "triple buffering" work the same way when your framerate <= refresh rate, which is almost always the case for console games.
Also, why would MS enforce oldest-first (queue) as opposed to newest-first page flipping on the 360? I'm assuming queue size is in fact static, as the opposite would be stupid, at least in a console environment.
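A quick toy check of the first point (assumed 33 ms renders on a 60 Hz display, nothing here is real console code): if at most one frame can finish between two vblanks, "present the oldest" and "present the newest" always pick the same frame.

```cpp
// Toy check with assumed numbers: a ~30 fps renderer on a 60 Hz display never
// completes more than one frame per vblank interval, so there is never a
// choice to make between a render-ahead queue and newest-frame page flipping.
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;   // ~16.7 ms per vblank
    const double render_ms  = 33.0;            // ~30 fps renderer
    double done = render_ms;                   // completion time of next frame
    for (int vblank = 1; vblank <= 8; ++vblank) {
        int ready = 0;
        while (done <= vblank * refresh_ms) {  // frames finished before this vblank
            ++ready;
            done += render_ms;
        }
        // with 0 or 1 frames ready there is nothing to choose between policies
        std::printf("vblank %d: %d new frame(s) ready\n", vblank, ready);
    }
}
```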
betan said: Render ahead and "triple buffering" work the same way when your framerate <= refresh rate, which is almost always the case for console games.

Forget console games - why would anyone want to use anything more than double buffering when framerate >= refresh in the first place?
http://www.anandtech.com/video/showdoc.aspx?i=3591&p=4
This explains it all, especially in joker's case.
First of all, I don't see any 360 reference; the quote talks about DirectX only.
Second, if you have a fixed-size queue implementation, providing an option for a "semi-stack" instead should not require rocket science, especially for size 3 (see the sketch below).
Third, none of those explains why we don't see triple buffering on the 360, because even if only the render-ahead option is available, it would still look like triple buffering in most cases (with vsync).
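A rough sketch of what such a "semi-stack" could look like with exactly three buffers (entirely hypothetical, no real API): keep the same buffers a render-ahead queue would use, but at the vblank flip to the most recently completed one and recycle the rest.

```cpp
// Hypothetical "semi-stack" presentation with three buffers (not a real API):
// the GPU always has a free buffer to render into, and the display always
// flips to the newest completed frame, silently dropping older ones.
#include <array>

struct Buffer { int frame_id = -1; };

struct SemiStack {
    std::array<Buffer, 3> buffers{};   // front buffer + two back buffers
    int front   = 0;                   // buffer currently scanned out
    int newest  = -1;                  // most recently completed back buffer
    int writing = 1;                   // back buffer the GPU renders into

    void frameCompleted(int id) {
        buffers[writing].frame_id = id;
        newest = writing;
        // render the next frame into whichever buffer is neither displayed
        // nor holding the newest completed frame
        for (int i = 0; i < 3; ++i)
            if (i != front && i != newest) { writing = i; break; }
    }

    void onVBlank() {                  // called at the vertical refresh
        if (newest < 0) return;        // nothing new: keep showing the old frame
        front = newest;                // flip to the newest completed frame
        newest = -1;                   // any older completed frame is dropped
    }
};
```

Whether the 360's actual presentation path allows anything like this is exactly the open question in the thread.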
I don't understand where the difference is between the 360 and its libraries.

Well, the 360 API is neither DirectX 9, 10, nor 11, nor plain Direct3D, but it's likely that MS didn't take a different route for the 360. Who knows without more insight. In the thread about XNA the guy hints that devs don't really have access to a real triple buffering implementation, whereas the described technique qualifies as triple buffering.
On top of that, if you compare their implementation, for an equal number of back buffers, with "standard" triple buffering, it doesn't make much difference when your renderer outputs more or less one frame every 33 ms. Frames get dropped, even if the description we have is unclear. It may explain why some games on the 360 drop out of a perceived v-sync mode at times. MS is not dumb enough, whether on the 360 or the PC, to let back buffers accumulate and input lag grow; if you did nothing and your renderer were slow, you would end up with your RAM full of back buffers and latency reaching whole seconds.

"Standard" triple buffering seems the cleverer approach, but when devs aim at 30fps the difference is minimal with regard to input lag, as the renderer is unlikely to generate enough frames within the display refresh interval to make any difference (it's different when your renderer outputs far more frames per second than the actual refresh rate of your display).

Once again, "standard" triple buffering seems the cleverer approach. Why didn't MS implement it? Mystery: patent issues (a part of OpenGL they can't blatantly copy?), no pressure to do so? Who knows; it's certainly not a hint at a hardware problem, as their implementation has the same requirements as "standard" triple buffering, it's just way less elegant.
Why have Sony gone to this effort?