Swiss scientists want to make long AI-generated videos even better by preventing them from 'degrading into randomness' - is that a good idea? I am not so sure
By Efosa Udinmwen published 21 hours ago
EPFL researchers teach AI to correct its own video mistakes
AI-generated videos often lose coherence over time due to a problem called drift
Models trained on perfect data struggle when handling imperfect real-world input
EPFL researchers developed retraining by error recycling to limit progressive degradation
AI-generated videos often lose coherence as sequences grow longer, a problem known as drift. This issue occurs because each new frame is generated based on the previous one, so any small error, such as a distorted object or slightly blurred face, is amplified over time.
Video generation models trained exclusively on ideal datasets struggle to handle imperfect input, which is why generated videos usually become unrealistic after a few seconds.
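The compounding described above can be sketched with a toy model (a hypothetical scalar stand-in, not the EPFL system): each "frame" is derived from the previous one, so a small per-step error that is slightly amplified at each step grows roughly geometrically with sequence length.

```python
# Toy illustration of autoregressive drift (not the EPFL model):
# each new frame inherits the previous frame's error, amplified by `gain`,
# plus a fresh per-step error. `step_error` and `gain` are made-up values.
def accumulated_errors(frames, step_error=0.01, gain=1.05):
    """Return the error magnitude after each of `frames` generation steps."""
    error = 0.0
    errors = []
    for _ in range(frames):
        # the next frame amplifies the inherited error and adds its own
        error = error * gain + step_error
        errors.append(error)
    return errors

short_clip = accumulated_errors(30)[-1]    # error after ~30 frames
long_clip = accumulated_errors(300)[-1]    # error after ~300 frames
# the long rollout's error dwarfs the short one's, which is why coherence
# collapses as clips get longer
```

This is only a caricature of the mechanism the article describes: the point is that error growth is driven by sequence length, not by any single bad frame.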
Recycling errors to improve AI performance
Generating videos that maintain logical continuity for extended periods remains a major challenge in the field. Now, researchers at EPFL's Visual Intelligence for Transportation (VITA) laboratory have introduced a method called retraining by error recycling.
Unlike conventional approaches that try to avoid errors, this method deliberately feeds the AI's own mistakes back into the training process. By doing so, the model learns to correct errors in future frames, limiting the progressive degradation of images.
The process involves generating a video, identifying discrepancies between produced frames and intended frames, and retraining the AI on these discrepancies to refine future output.
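The generate-measure-retrain loop described above can be sketched in miniature. The following is a hypothetical scalar toy, not the VITA lab's implementation: a "model" with a per-step bias rolls out a sequence, the discrepancy between produced and intended frames is measured, and the model is corrected using its own errors.

```python
# Conceptual sketch of "retraining by error recycling" on a toy model.
# The intended sequence is 1, 2, 3, ...; the model's only flaw is a
# constant per-step bias. All names and values here are illustrative.
def rollout(model_bias, frames=50):
    """Autoregressive rollout: each step should add exactly 1.0."""
    value, produced = 0.0, []
    for _ in range(frames):
        value += 1.0 + model_bias   # the bias compounds across frames
        produced.append(value)
    return produced

def recycle_errors(model_bias, rounds=20, lr=0.5):
    """Shrink the bias by retraining on the model's own rollout errors."""
    for _ in range(rounds):
        produced = rollout(model_bias)
        intended = [float(i + 1) for i in range(len(produced))]
        # step 2: measure the mean discrepancy between produced and intended
        mean_err = sum(p - t for p, t in zip(produced, intended)) / len(produced)
        # step 3: convert to a per-step error estimate and correct the model
        per_step = mean_err / ((len(produced) + 1) / 2)
        model_bias -= lr * per_step
    return model_bias
```

Each round halves the bias in this toy, so after a few rounds of feeding the model its own mistakes, the rollout stays close to the intended sequence. The real method presumably operates on frame tensors and gradient updates rather than a scalar, but the loop structure is the same.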
Current AI video systems typically produce sequences that remain realistic for less than 30 seconds before shapes, colors, and motion logic deteriorate.
By integrating error recycling, the EPFL team has produced videos that resist drift over longer durations, potentially removing strict time constraints on generative video.
This advancement allows AI systems to create more stable sequences in applications such as simulations, animation, or automated visual storytelling.
Although this approach addresses drift, it does not eliminate all technical limitations. Retraining by recycling errors increases computational demand and may require continuous monitoring to prevent overfitting to specific mistakes. Large-scale deployment may face resource and efficiency constraints, as well as the need to maintain consistency across diverse video content.
Whether feeding AI its own errors is truly a good idea remains uncertain, as the method could introduce unforeseen biases or reduce generalization in complex scenarios.
The development at VITA Lab shows that AI can learn from its own errors, potentially extending the time limits of video generation.
However, it remains unclear how the method will perform outside controlled testing or in creative applications, which suggests caution before assuming it can fully solve the drift problem.
Via TechXplore
https://www.techradar.com/pro/swiss-scientists-want-to-make-long-ai-generated-videos-even-better-by-preventing-them-from-degrading-into-randomness-is-that-a-good-idea-i-am-not-so-sure