A Critical Assessment of Visual Sound Source Localization Models Including Negative Audio

Xavier Juanola

xavier.juanola@upf.edu

Universitat Pompeu Fabra,
Barcelona, Spain

Gloria Haro

gloria.haro@upf.edu

Universitat Pompeu Fabra,
Barcelona, Spain

Magdalena Fuentes

mf3734@nyu.edu

New York University,
New York City, USA

Paper accepted to ICASSP 2025

[Paper] [Code]


Abstract

The task of Visual Sound Source Localization (VSSL) involves identifying the location of sound sources in visual scenes, integrating audio-visual data for enhanced scene understanding. Despite advancements in state-of-the-art (SOTA) models, we observe three critical flaws: i) the evaluation of the models focuses mainly on sounds produced by objects that are visible in the image, ii) the evaluation often assumes prior knowledge of the size of the sounding object, and iii) no universal localization threshold for real-world scenarios has been established, as previous approaches only consider positive examples rather than both positive and negative cases. In this paper, we introduce a novel test set and metrics designed to complement the current standard evaluation of VSSL models by testing them in scenarios where none of the objects in the image correspond to the audio input, i.e., a negative audio. We consider three types of negative audio: silence, noise, and offscreen. Our analysis reveals that numerous SOTA models fail to adjust their predictions according to the audio input, suggesting that these models are not leveraging audio information as intended. Additionally, we provide a comprehensive analysis of the range of maximum values of the estimated audio-visual similarity maps for both positive and negative audio, showing that most models are not discriminative enough to admit a universal threshold for sound localization without a priori information about the sounding object, such as its size and visibility.
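
To make the setup concrete, below is a minimal sketch, in PyTorch, of how a VSSL model's audio-visual similarity map can be computed and compared against a universal threshold, including for negative audios. The encoder names and shapes are hypothetical placeholders standing in for any model under test; this is not the paper's released code (see the [Code] link above).

```python
import torch
import torch.nn.functional as F

def similarity_map(image, audio, image_encoder, audio_encoder):
    """Cosine similarity between a global audio embedding and every
    spatial location of the visual feature map.

    Assumed (hypothetical) encoder outputs:
      image_encoder(image) -> (1, C, H, W) spatial visual features
      audio_encoder(audio) -> (1, C) global audio embedding
    """
    vis = F.normalize(image_encoder(image), dim=1)  # unit-norm per location
    aud = F.normalize(audio_encoder(audio), dim=1)  # unit-norm embedding
    return torch.einsum("bchw,bc->bhw", vis, aud)   # (1, H, W) in [-1, 1]

def localizes_sound(image, audio, encoders, threshold):
    """With a universal threshold, a discriminative model should answer
    'no source' for negative audios (silence, noise, offscreen): the
    maximum similarity should then stay below the threshold."""
    sim = similarity_map(image, audio, *encoders)
    return sim.max().item() >= threshold
```

Under this formulation, comparing the distribution of `sim.max()` over positive versus negative audios is what reveals whether a single threshold can separate the two cases.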


Results

The following figure compares the visualization of the localization map used in previous publications with the one proposed in this paper. The first row shows the previous visualization, which normalizes the cosine similarity between the audio and image features and overlays the resulting localization map on the original image. The second row shows our proposed visualization: to expose the effect of the universal threshold, audio-visual similarity values below the threshold are set to zero, the remaining values are normalized to the range [0, 1], and the result is blended with the original image.

Figure 1: Comparison between the visualization used in previous methods and the one proposed in this publication using the New Universal Threshold.
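
For reference, here is a minimal NumPy sketch of this thresholded visualization, assuming the similarity map has already been resized to the image resolution and the image is an RGB array in [0, 1]; the function name and blending weight are illustrative, not taken from the released code.

```python
import numpy as np

def thresholded_overlay(sim_map: np.ndarray, image: np.ndarray,
                        threshold: float, alpha: float = 0.5) -> np.ndarray:
    """Zero out audio-visual similarities below the universal threshold,
    normalize the survivors to [0, 1], and blend with the image, so that
    regions the model does not associate with the audio stay dark."""
    masked = np.where(sim_map >= threshold, sim_map, 0.0)
    peak = masked.max()
    heat = masked / peak if peak > 0 else masked     # normalize to [0, 1]
    # Broadcast the single-channel heatmap over the RGB channels.
    return (1 - alpha) * image + alpha * heat[..., None]
```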

The following figures show the localization maps produced by the different models for the images male_ukulele_9253_male, male_sheep_9215, and trumpet_acousticguitar_6274 and for the different audios (Male, Ukulele, Sheep, Acoustic Guitar, Trumpet, Silence, Noise, and Offscreen).

Figure 4: Predictions of the different models evaluated in the paper for the images male_ukulele_9253_male, male_sheep_9215, and trumpet_acousticguitar_6274 from the IS3 dataset.

Acknowledgements

The authors acknowledge support from the FPI scholarship PRE2022-101321, María de Maeztu CEX2021-001195-M/AEI/10.13039/501100011033, MICINN/FEDER UE project ref. PID2021-127643NB-I00, the Fulbright Program, and the Ministerio de Universidades (Spain) program for mobility stays of professors and researchers in foreign higher education and research centers.
