Workshop on Deepfake Detection, Localization, and Interpretability

IJCAI 2025 | August 16, 2025 | Montreal/Guangzhou

📄 News

5 Mar: The Deepfake Detection, Localization, and Interpretability Workshop and Competition has been accepted at IJCAI 2025.

16 Apr: The paper submission site is online: https://chairingtool.com/conferences/ijcai25-w04/main-track?role=author. Please check the Call for Papers section for related research topics.

More news will be added here shortly.

📄 Description

This workshop addresses the new security challenges brought about by the rapid development of AI-generated content. As deepfake technology expands from single-modality face synthesis to multimodal content generation, the reliability of existing detection methods in complex real-world scenarios faces severe challenges.

Although traditional black-box classification models can output probability judgments, they cannot provide a visual chain of evidence that would convince law enforcement agencies. Interpretability research based on large language models remains exploratory, lacking a systematic theoretical framework and evaluation standards.

By bringing together leading scholars from around the world to jointly tackle core technical bottlenecks such as multimodal forgery localization, weakly supervised explanation generation, and cross-scenario detection generalization, this workshop will promote the establishment of a verifiable and traceable deepfake analysis framework.

Aligned with IJCAI’s mission of "AI for Good", this "Deepfake Detection, Localization, and Interpretability" workshop is designed to promote progress in the AI-safety community, with particular emphasis on advancing the state of the art in interpretable deepfake detection. The workshop sessions will feature a combination of invited talks, paper presentations, and competition results.

📄 Call for Papers

We welcome research papers on the following topics, as well as papers detailing the solutions used in our competition. Top solutions and best papers will be invited to the workshop for oral presentation. Please submit papers at https://chairingtool.com/conferences/ijcai25-w04/main-track?role=author by May 26.

Submission Guidelines

  • Submissions aligned with workshop themes
  • Up to 7 pages (IJCAI 2025 main track format)
  • Single-round peer review process
  • Presentation formats: Oral/Best Paper

Key Topics

  • Multimodal Deepfake Detection
  • LLM/VLM-based Methods
  • Interpretability Analysis

Related Research Topics

  • 1. Detection Techniques for Risky AI-Generated Content:
    • ➡ Multi-Scene Deepfake Detection: Covering deepfake data across diverse scenarios, including face manipulation and generative content of natural objects.
    • ➡ Multi-Modal Deepfake Detection: Addressing deepfake detection in text, image, audio, and video modalities.
    • ➡ Safety Risk Content: Identifying generative content containing harmful elements, such as violent, terrorist, or explicit materials.
  • 2. Interpretable Deepfake Detection Methods:
    • ➡ Weakly-/Unsupervised Deepfake Localization: Developing methods for detecting and localizing manipulations with minimal or no supervision.
    • ➡ Interpretable Reasoning Frameworks via Large Models: Leveraging large language models (LLMs) and vision-language models (VLMs) to enhance interpretability in deepfake detection.
    • ➡ Scalable Detection Analysis Agents: Designing agents capable of providing interpretable assessments for complex generative manipulations.
  • 3. Adversarial Attack and Defense for Generative Large Models:
    • ➡ Attack Techniques: Exploring various attack vectors on generative large models, including prompt injections, jailbreak attacks, and fine-tuning manipulations.
    • ➡ Risk-Trigger Mechanisms: Investigating the mechanisms that trigger risk-inducing content in generative model outputs.
    • ➡ Robust Defense Strategies: Developing effective strategies to ensure the robustness of generative models in real-world applications.
  • 4. Detection Methods for Advanced Face Spoofing Attacks:
    • ➡ Threat Analysis: Understanding how emerging facial spoofing techniques compromise biometric identification systems.
    • ➡ Robust Detection for Physical Attacks: Constructing methods to reliably detect facial spoofing under challenging physical threats.
    • ➡ Joint Detection Against Physical-Digital Attacks: Innovating detection methods for hybrid attacks that combine physical and digital manipulations.
  • 5. Datasets and Evaluation Protocols:
    • ➡ Multi-Modal Spatiotemporal Datasets: Constructing datasets for locating image and video manipulations across time and space in multi-modal contexts.
    • ➡ Safety Risk Assessment Datasets: Building datasets targeted at evaluating the risks posed by generative large models.
    • ➡ Standardized Evaluation Protocols: Defining metrics and protocols to assess the interpretability and effectiveness of detection methods (a minimal metric sketch follows this list).
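
To make the last point concrete, below is a minimal sketch of one common family of pixel-level localization metrics (IoU and F1 over binary manipulation masks), as often used in manipulation-localization benchmarks. It is illustrative only and is not the competition's official protocol; the mask format, the 0.5 binarization threshold, and the choice of metrics are all assumptions.

```python
# Illustrative pixel-level localization metrics. NOT the workshop's official
# evaluation protocol; mask shapes, the 0.5 threshold, and the metric choices
# are assumptions made for this sketch.
import numpy as np

def localization_scores(pred: np.ndarray, gt: np.ndarray, thr: float = 0.5):
    """Compute pixel-level IoU and F1 between a predicted manipulation
    probability map `pred` and a binary ground-truth mask `gt` (same shape)."""
    p = pred >= thr              # binarize the predicted probability map
    g = gt.astype(bool)
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    iou = tp / (tp + fp + fn + 1e-9)         # intersection over union
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-9)  # pixel-level F1 (Dice)
    return iou, f1

# Toy usage: a 4x4 image whose top-left 2x2 block is manipulated.
gt = np.zeros((4, 4)); gt[:2, :2] = 1
pred = np.zeros((4, 4)); pred[:2, :3] = 0.9  # slightly over-predicts
print(localization_scores(pred, gt))  # IoU ≈ 0.67, F1 ≈ 0.8
```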

🏆 Awards

Awards for each competition track and for the best papers will be announced here.