Realtime AI content generation

Those game clips are generated with Gen-3, which has a maximum clip length of 10 seconds. That is not great for temporal consistency: every 10 seconds Lara Croft's shirt might turn from wool to leather to whatever other material.

If this type of architecture is ever going to be used for visual effects in a real-time gameplay context, it has a lot of maturing ahead of it. The good news is that game engines already have depth buffers, color buffers, etc. to ground the video (a rough sketch of the idea follows below).
If this type of architecture were used, I don't think it would be an off-the-shelf model. The devs would train their own model for the game.
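To make the "grounding" idea concrete, here is a minimal, hypothetical sketch of a per-frame denoiser that takes the engine's depth and albedo buffers as extra input channels. This is not Gen-3's actual architecture and the names (GBufferDenoiser, etc.) are made up; it just shows how engine buffers could condition a generative model.

```python
# Hypothetical sketch: conditioning a per-frame denoiser on engine G-buffers.
# Not Gen-3's architecture; all names here are illustrative.
import torch
import torch.nn as nn

class GBufferDenoiser(nn.Module):
    """Tiny denoiser that takes a noisy RGB frame plus engine-provided
    depth and albedo buffers as extra input channels."""
    def __init__(self, hidden=64):
        super().__init__()
        # 3 (noisy RGB) + 1 (depth) + 3 (albedo) = 7 input channels
        self.net = nn.Sequential(
            nn.Conv2d(7, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),  # predict RGB noise
        )

    def forward(self, noisy_rgb, depth, albedo):
        # Channel-wise concatenation grounds the prediction in the engine's
        # geometry (depth) and materials (albedo), which is what would keep
        # a character's shirt the same material from frame to frame.
        x = torch.cat([noisy_rgb, depth, albedo], dim=1)
        return self.net(x)

if __name__ == "__main__":
    model = GBufferDenoiser()
    frame = torch.randn(1, 3, 256, 256)   # noisy frame
    depth = torch.rand(1, 1, 256, 256)    # engine depth buffer
    albedo = torch.rand(1, 3, 256, 256)   # engine albedo buffer
    print(model(frame, depth, albedo).shape)  # torch.Size([1, 3, 256, 256])
```

In practice a studio training its own model would feed whatever buffers the engine already renders (depth, normals, albedo, motion vectors), which is exactly why an off-the-shelf video model is a poor fit.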
 