Efficient Video to Audio Mapper with Visual Scene Detection

Duke Kunshan University
Submitted to ICASSP 2025

Abstract

Video-to-audio (V2A) generation aims to produce corresponding audio given silent video inputs. This task is particularly challenging due to the cross-modal and sequential nature of the audio-visual features involved. Recent works have made significant progress in bridging the domain gap between video and audio, generating audio that is semantically aligned with the video content. However, a critical limitation of these approaches is their inability to effectively recognize and handle multiple scenes within a video, which often leads to suboptimal audio generation in such cases. In this paper, we first reimplement a state-of-the-art V2A model with a slightly modified lightweight architecture, achieving results that outperform the baseline. We then propose an improved V2A model that incorporates a scene detector to address the challenge of switching between multiple visual scenes. Results on VGGSound show that our model can recognize and handle multiple scenes within a video and achieves superior performance over the baseline in both fidelity and relevance.
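
The key addition described above is a visual scene detector that segments a multi-scene video before audio generation. As a rough illustration only, the sketch below detects scene boundaries by thresholding the cosine distance between consecutive per-frame visual embeddings (e.g., CLIP features) and splits the embedding sequence into per-scene segments; the function names, the threshold value, and the toy data are hypothetical and do not reflect the paper's actual implementation.

# Hypothetical sketch: split a video into visual scenes by thresholding the
# cosine distance between consecutive frame embeddings. Illustrative only;
# names, threshold, and data are assumptions, not the paper's implementation.
import numpy as np

def detect_scene_boundaries(frame_embeddings: np.ndarray, threshold: float = 0.3):
    """Return the frame indices assumed to start a new scene.

    frame_embeddings: (T, D) array of per-frame visual features,
    e.g., CLIP image embeddings for T uniformly sampled frames.
    """
    # L2-normalize so the row-wise dot product equals cosine similarity.
    normed = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    sims = np.sum(normed[:-1] * normed[1:], axis=1)  # similarity of frames t and t+1
    boundaries = [0]                                  # the first frame always opens a scene
    boundaries += [t + 1 for t, s in enumerate(sims) if 1.0 - s > threshold]
    return boundaries

def split_into_scenes(frame_embeddings: np.ndarray, threshold: float = 0.3):
    """Split the (T, D) embedding sequence into per-scene chunks."""
    starts = detect_scene_boundaries(frame_embeddings, threshold)
    ends = starts[1:] + [len(frame_embeddings)]
    return [frame_embeddings[s:e] for s, e in zip(starts, ends)]

if __name__ == "__main__":
    # Toy check: 10 frames clustered around +1 followed by 10 clustered around -1
    # should yield exactly one detected boundary, at frame 10.
    rng = np.random.default_rng(0)
    frames = np.vstack([rng.normal(1.0, 0.05, (10, 512)),
                        rng.normal(-1.0, 0.05, (10, 512))])
    print(detect_scene_boundaries(frames))  # expected: [0, 10]

Each detected segment can then be mapped to audio independently, which is one plausible way a scene-aware V2A pipeline could avoid blending sounds from unrelated scenes.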

Architecture

[Figure: overall model architecture]

V2A Generation for Videos with Multiple Scenes

Ground Truth

Ours

V2A-Mapper

V2A Generation for AI-Generated Videos (generated by Kling)

BibTeX

@misc{yi2024efficientvideoaudiomapper,
      title={Efficient Video to Audio Mapper with Visual Scene Detection},
      author={Mingjing Yi and Ming Li},
      year={2024},
      eprint={2409.09823},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2409.09823},
}