SAFE: Image Edit Detection and Localization Challenge 2025
Hosted by UL Research Institutes – Digital Safety Research Institute (DSRI)
Co-located with the SynRDinBAS: Synthetic Realities and Data in Biometric Analysis and Security Workshop @ WACV 2026
📝 All participants are required to register: Google Form link
📧 Contact: safe-challenge-2025@dsri.org
💬 Join our Discord: Discord Server
🤗 HuggingFace community
📤 Entry submission webform: [🚫 Opens: week of November 17, 2025]
📣 Updates
2025-11-17: The HuggingFace community for the challenge is now open. It includes an example submission repository and a pilot task dataset.
🔎 Overview
The SAFE: Image Edit Detection and Localization Challenge 2025 focuses on bridging the research gap in detecting partially synthetic images. Participants will compete to create the best-performing synthetic image detectors as measured on novel, private datasets.
Localized changes, such as subtle face edits, background inpainting, or object insertions/removals, are harder to detect and more likely to deceive viewers than fully synthetic content.
Target tasks include:
- Detection of images generated by state-of-the-art generative models (image/text-to-image, diffusion, etc.)
- Detection & localization of object additions and removals
- Identification of inpainted or altered regions
- Image classification: authentic vs. partially synthetic vs. fully synthetic
By anchoring the challenge in subtle manipulations, we aim to drive innovation in detection methods that move beyond binary detection toward fine-grained reasoning about where and how an image was altered.
New datasets will be generated specifically for this challenge, with a substantial portion synthetically manipulated at varying granularities and annotated for both detection and localization.
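To make these targets concrete, the sketch below shows one plausible shape for a per-image prediction covering all three: a detection score, a three-way class label, and an edit-localization mask. The field names and conventions are illustrative assumptions, not the official output format, which is defined by the submission interface.

```python
# Illustrative only: one plausible per-image prediction record covering the
# detection, classification, and localization targets described above.
# All field names and conventions here are assumptions, not the official format.
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class ImagePrediction:
    image_id: str                    # identifier of the evaluated image
    synthetic_score: float           # P(image contains synthetic content), in [0, 1]
    label: str                       # "authentic" | "partially_synthetic" | "fully_synthetic"
    edit_mask: Optional[np.ndarray]  # HxW boolean mask of edited pixels; None if not localized
```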
❗ Blind Evaluation: This challenge uses novel, private datasets for evaluation. To maximize the validity of the performance measurements:
- No training data will be provided. Participants may train on any data they have rights to use.
- Evaluation data will not be published. Organizers will publish general descriptions of the collection methodology only.
- Small data samples are provided only to validate submission flow and are not representative of the actual evaluation data content.
Submission Platform: Evaluations for this challenge will run on the Dyff Platform hosted by UL DSRI.
Primary Metrics: Detection (Accuracy, Balanced Accuracy, AUC) and Localization (IoU, pixel-level F1).
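For reference, the sketch below computes these metrics with scikit-learn and NumPy under common conventions (binary detection labels, boolean edit masks). The threshold choice, the handling of empty masks, and any aggregation across images are assumptions here; the organizers' exact evaluation code is not published.

```python
# A minimal sketch of the stated metrics, assuming binary per-image labels/scores
# and boolean edit masks; conventions (threshold, empty-mask handling) are assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, roc_auc_score


def detection_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, balanced accuracy, and AUC over per-image synthetic scores."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "balanced_accuracy": balanced_accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
    }


def localization_metrics(mask_true, mask_pred):
    """IoU and pixel-level F1 between boolean edited-region masks."""
    t = np.asarray(mask_true, dtype=bool)
    p = np.asarray(mask_pred, dtype=bool)
    tp = np.logical_and(t, p).sum()
    fp = np.logical_and(~t, p).sum()
    fn = np.logical_and(t, ~p).sum()
    union = np.logical_or(t, p).sum()
    iou = tp / union if union else 1.0  # assumption: two empty masks count as a perfect match
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return {"iou": float(iou), "pixel_f1": float(f1)}
```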
Recognition: Top performers may be eligible for research grants and travel support to WACV for invited teams.
📅 Schedule
| Event | Date | Status |
|---|---|---|
| Starter project released | November 17, 2025 | HuggingFace community |
| Pilot Task Opens | week of November 17, 2025 | 🚫 Not Yet Open |
| Task 1 Opens | December 1, 2025 | 🚫 Not Yet Open |
| End of Evaluation Phase | February 27, 2026 | ⏳ |
| Workshop Session & Results @ WACV | March 6–10, 2026 | 📍 SynRDinBAS Workshop |
Dates subject to change; final timelines will be posted.
🧠 Challenge Tasks
This is a script-based competition: your submitted system runs on our infrastructure against private data and must output its predictions in a standard format.
🔍 Pilot Task (🚫 Not Yet Open): System Testing and Competition Design Validation
The Pilot Task allows participants to test the submission process, system behavior, and evaluation flow before the release of official competition tasks. It is not intended to reflect the complexity, realism, or diversity of the final datasets.
Goals:
- Verify submission pipeline (upload, evaluation, logging, etc.)
- Allow organizers to validate infrastructure and parameters
- Familiarize teams with submission mechanics before full competition
🎯 Task 1 (🚫 Not Yet Open)
Details coming soon.
🤝 Participant Instructions
Registration
📝 All participants are required to register: Google Form link
The principal investigator of each participating team should fill out the registration web form. The challenge organizers must manually approve your registration. After approval, you will receive an access token for the Dyff Platform that will allow you to manage your team information, make submissions, and review non-public results and system logs.
Submissions
📤 Entry submission webform: [🚫 Opens: November 17, 2025]
Submissions must be in the form of a containerized web service that implements a standard JSON API over HTTP. You will submit a runnable Docker image and, optionally, a volume of data files to be mounted in the running Docker container.
We provide an example submission repository that contains a specification and reference implementation of the required interface as well as step-by-step instructions for creating a new submission.
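For orientation, here is a minimal sketch of what such a containerized JSON-over-HTTP service could look like in Python with FastAPI. The route, request and response fields, and scoring stub are illustrative assumptions; the authoritative interface specification lives in the example submission repository.

```python
# A hypothetical sketch of a JSON-over-HTTP inference service.
# The endpoint path, field names, and response schema are assumptions for
# illustration; follow the specification in the example submission repository.
import base64
import io

from fastapi import FastAPI
from PIL import Image
from pydantic import BaseModel

app = FastAPI()


class PredictRequest(BaseModel):
    image_b64: str  # assumed: base64-encoded image bytes


class PredictResponse(BaseModel):
    synthetic_score: float  # assumed: probability the image is (partially) synthetic


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Decode the image payload.
    image = Image.open(io.BytesIO(base64.b64decode(req.image_b64))).convert("RGB")
    # Placeholder: replace with your detector's forward pass over `image`.
    score = 0.5
    return PredictResponse(synthetic_score=score)
```

Inside the Docker image, a service like this would typically be launched with something like `uvicorn main:app --host 0.0.0.0 --port 8080`; the module name and port here are again assumptions, not the required configuration.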
Teams will be allowed a limited number of submissions per day.
Submitted systems will run in a virtual machine with access to the following resources:
- GPU: 1x Nvidia L4
- CPU: TBD
- Memory: TBD
📊 Evaluation
Submissions will be evaluated on private data created for this challenge, and the evaluations will be run on private computing infrastructure. Evaluation data will not be released publicly. The organizers will publish a description of the dataset creation methodology.
The competition will maintain both a public leaderboard and a private leaderboard.
Datasets may differ between the two to ensure fair and unbiased evaluation.
⚖️ Rules
To ensure a fair and rigorous evaluation process for the SAFE (Synthetic and AI Forensic Evaluations) Image Edit Detection and Localization Challenge, all participants must adhere to the following:
- **Leaderboard**
  - Both public and private leaderboards will be maintained.
  - The private leaderboard will serve as the basis for final ranking.
- **Submission Limits**
  - Participants will be limited in the number of daily submissions.
- **Confidentiality**
  - Participants agree not to publicly compare their results with those of other teams until the results are published outside the conference venue.
  - Participants are free to publish and use their own results independently.
- **Appropriate Use**
  - Use of provided computing resources for any non-challenge-related purpose is prohibited.
  - Participants should take appropriate precautions to protect their authorization credentials (API tokens, etc.) and report account compromise or misuse to the challenge organizers immediately.
- **Compliance**
  - All rules and guidelines issued by the organizers must be followed.
  - Failure to comply may result in disqualification or exclusion from future challenges.
By participating in the SAFE Challenge, you agree to uphold these rules and contribute to advancing the field of synthetic image forensics.
📚 Helpful Resources
- 💬 Discord: Invite Link
- 📧 Email the organizers: safe-challenge-2025@dsri.org
- 📮 Dyff Documentation: https://docs.dyff.io