Publications

A Critical Assessment of Visual Sound Source Localization Models Including Negative Audio

Submitted to ICASSP 2025, 2024

The task of Visual Sound Source Localization (VSSL) involves identifying the location of sound sources in visual scenes, integrating audio-visual data for enhanced scene understanding. Despite advancements in state-of-the-art (SOTA) models, we observe three critical flaws: i) the evaluation of the models is mainly focused on sounds produced by objects that are visible in the image, ii) the evaluation often assumes prior knowledge of the size of the sounding object, and iii) no universal threshold for localization in real-world scenarios is established, as previous approaches only consider positive examples without accounting for both positive and negative cases. In this paper, we introduce a novel test set and metrics designed to complement the current standard evaluation of VSSL models by testing them in scenarios where none of the objects in the image corresponds to the audio input, i.e., a negative audio. We consider three types of negative audio: silence, noise, and offscreen sounds. Our analysis reveals that numerous SOTA models fail to appropriately adjust their predictions based on the audio input, suggesting that these models may not be leveraging audio information as intended. Additionally, we provide a comprehensive analysis of the range of maximum values in the estimated audio-visual similarity maps, for both positive and negative audio, and show that most of the models are not discriminative enough, making them unsuitable for selecting a universal threshold appropriate for sound localization without a priori information about the sounding object, namely its size and visibility.
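To illustrate the thresholding question the abstract raises, the following is a minimal sketch of how a single detection threshold could be applied to the maximum of an audio-visual similarity map to decide whether any visible object corresponds to the audio. The function name, the map size, the threshold value, and the toy score ranges are all hypothetical and are not taken from the paper or from any specific VSSL model.

```python
import numpy as np

def is_source_visible(similarity_map: np.ndarray, tau: float) -> bool:
    """Decide whether the audio matches a visible sound source.

    similarity_map: 2D array of audio-visual similarity scores, one value
    per spatial location (e.g., cosine similarity between the audio
    embedding and each visual feature).
    tau: detection threshold; a maximum above tau is read as evidence
    that the sounding object appears in the image.
    """
    return float(similarity_map.max()) > tau

# Toy usage: with a sufficiently discriminative model, a positive audio
# (sounding object visible) should yield a higher maximum similarity than a
# negative one (silence, noise, or offscreen sound), so one threshold tau
# can separate the two cases. The score ranges below are purely illustrative.
rng = np.random.default_rng(0)
positive_map = rng.uniform(0.4, 0.9, size=(14, 14))
negative_map = rng.uniform(0.0, 0.3, size=(14, 14))
tau = 0.35
print(is_source_visible(positive_map, tau))  # True  -> localize the source
print(is_source_visible(negative_map, tau))  # False -> no visible source
```

The paper's finding is that, for most SOTA models, the positive and negative score ranges overlap too much for any single value of tau to work reliably.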

Recommended citation: Xavier Juanola, Gloria Haro, Magdalena Fuentes. (2024). "A Critical Assessment of Visual Sound Source Localization Models Including Negative Audio." Submitted to ICASSP 2025 https://arxiv.org/pdf/2410.01020

A Brief Analysis of SLAVC method for Sound Source Localization

Published in Image Processing On Line (IPOL), 2023

In 2022, Mo and Morgado introduced a novel self-supervised learning approach for Visual Sound Source Localization, denoted SLAVC [13]. The proposed method is based on multiple-instance contrastive learning. In addition to improving on the results of previous methods, it also addresses two critical problems that former methods faced: 1) excessive overfitting despite training on extensive datasets, and 2) a tendency to hallucinate sound sources even when there is no visual evidence to support them in the video. In this paper, we briefly present the method, offer an online executable version that allows users to test it on their own image-audio pairs, and propose some improvements that could benefit the model as future work.
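For readers unfamiliar with audio-visual contrastive training, the sketch below shows a generic InfoNCE-style objective in which matching audio-image pairs in a batch are positives and all other pairings are negatives. It is only an illustration of the general idea; SLAVC's actual multiple-instance formulation selects positives within each image differently, and the function name, shapes, and temperature value here are assumptions.

```python
import torch
import torch.nn.functional as F

def audio_visual_infonce(audio_emb: torch.Tensor,
                         visual_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """Generic audio-visual contrastive loss (illustrative, not SLAVC's loss).

    audio_emb:  (B, D) audio embeddings, one per clip in the batch.
    visual_emb: (B, D) visual embeddings, e.g., pooled from the spatial
                feature map of the corresponding frame.
    """
    audio_emb = F.normalize(audio_emb, dim=-1)
    visual_emb = F.normalize(visual_emb, dim=-1)
    # (B, B) matrix of similarities between every audio and every image.
    logits = audio_emb @ visual_emb.t() / temperature
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
    # The diagonal holds the matching audio-image pairs; off-diagonal
    # entries act as in-batch negatives in both directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```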

Recommended citation: Xavier Juanola, Gloria Haro. (2023). "A Brief Analysis of SLAVC method for Sound Source Localization." Image Processing On Line http://www.ipol.im/pub/art/2024/525/article.pdf