Yang and Hong-Hanh have had a paper entitled “FreDA: Training-Free Test-Time Adaptation for Deepfake Detection via Non-Parametric Cache Retrieval” accepted for presentation at the 17th EAI International Conference on Digital Forensics & Cyber Crime (ICDF2C-2026), Reykjavik, Iceland, Sept. 2026.

Well done, Yang and Hong-Hanh! This is Yang’s first paper with our ASEADOS group.

Abstract:

Deepfake (DF) detectors face significant challenges when deployed in real-world environments, particularly when encountering test samples that deviate from the training data through either distribution shifts or post-processing manipulations. To address these challenges, we propose FreDA (Free Detection Adaptation), a novel training-free test-time adaptation method that enhances the adaptability of detectors during inference without requiring any gradient computation or parameter updates. Our key idea is to leverage a non-parametric cache retrieval mechanism tailored to binary classification, enabling the model to dynamically incorporate distributional information from a small reference set via a similarity-based lookup and a residual connection. We also introduce a fine-tuned variant, FreDA-F, which optionally refines cached keys with minimal training to further optimize performance under limited computational budgets. Empirically, our method demonstrates superior transferability and resilience against various perturbations across multiple benchmarks, including FaceForensics++, Celeb-DF-v1, and DFDCP. Compared to traditional fine-tuning paradigms, FreDA provides a parameter-efficient and highly adaptable framework that significantly enhances the generalization of deepfake detectors in diverse, real-world scenarios.
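To give a flavour of the idea, the core mechanism the abstract describes, a similarity-based lookup over a small cache of reference features combined with the detector's own output through a residual connection, can be sketched as below. All names, shapes, and the weighting hyperparameters (`beta`, `alpha`) are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
import numpy as np

def cache_adapted_logits(base_logits, feat, cache_keys, cache_values,
                         beta=5.0, alpha=0.5):
    """Blend a frozen detector's logits with a non-parametric cache vote.

    base_logits : (2,)   logits from the frozen deepfake detector
    feat        : (d,)   L2-normalized feature of the test sample
    cache_keys  : (n, d) L2-normalized features of the reference set
    cache_values: (n, 2) one-hot labels (real / fake) for each cached sample
    beta, alpha : sharpening and residual weights (illustrative values)
    """
    sims = cache_keys @ feat                       # cosine similarities
    weights = np.exp(beta * (sims - sims.max()))   # sharpened affinities
    weights /= weights.sum()
    cache_logits = weights @ cache_values          # similarity-weighted vote
    # Residual connection: the cache nudges, rather than replaces, the
    # detector's original prediction.
    return base_logits + alpha * cache_logits
```

Because the cache is non-parametric, adapting to a new domain only requires swapping the reference set; no gradients or parameter updates are involved, which is what makes this kind of test-time adaptation "training-free".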