Leaderboard submissions for the Activity-Classification task are now closed. Thanks to all participating teams!

Open Fine Grained Activity Detection Challenge (OpenFAD)

The Open Fine-grained Activity Detection (OpenFAD) challenge is an activity classification and detection evaluation measuring how well systems can automatically classify or temporally detect fine-grained activities in the Consented Activities of People (CAP) dataset, collected using handheld devices.
  • Activity Classification (AC)

    The AC task is to assign a single activity class label, from a set of predefined classes, to each video clip and to provide a confidence score. The system is presented with nominally 3-second trimmed video clips, each containing one of the activity classes, and must output the activity class along with a confidence score, where higher values indicate the clip is more likely to contain that activity.

  • Temporal Activity Detection (TAD)

    The TAD task is to automatically detect and temporally localize each activity instance in untrimmed video. The system is presented with nominally 45-second untrimmed video clips in which no activity instances overlap in time. For each detected activity instance, the TAD system must output the activity class, a start frame, an end frame, and a confidence score, where higher values indicate the instance is more likely to have occurred.

    Please refer to the evaluation plan below for the detailed tasks and relevant metrics.
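To make the two output types concrete, the sketch below writes one hypothetical row for each task. The column names and ordering here are illustrative assumptions only; the evaluation plan is the authoritative source for the required CSV schema.

```python
import csv
import io

# Hypothetical AC output: one class label and confidence per trimmed clip.
# Column names are illustrative assumptions, not the official schema --
# consult the evaluation plan for the required CSV format.
ac_buf = io.StringIO()
writer = csv.writer(ac_buf)
writer.writerow(["video_file_id", "activity_id", "confidence_score"])
writer.writerow(["clip_0001.mp4", "person_opens_door", "0.92"])

# Hypothetical TAD output: class, start frame, end frame, and confidence
# per detected instance in an untrimmed clip.
tad_buf = io.StringIO()
writer = csv.writer(tad_buf)
writer.writerow(["video_file_id", "activity_id", "frame_start",
                 "frame_end", "confidence_score"])
writer.writerow(["video_0001.mp4", "person_opens_door", "120", "210", "0.87"])

print(ac_buf.getvalue(), end="")
print(tad_buf.getvalue(), end="")
```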

Evaluation Plan
OpenFAD is open worldwide; we invite all organizations to submit their system results to the OpenFAD leaderboards. The challenge provides data (training, validation, and test sets) so participants can train and run a system on their own hardware platform, then submit their system outputs to a web-based leaderboard that displays scoring results.
To take part in the OpenFAD challenge you need to register on this website and complete the data license to download the data. Once your system is functional, you will be able to upload your system outputs to the challenge website and see your scoring results displayed on the leaderboard. Please refer to Instructions for the details.
If you have any questions, please email the OpenFAD team:
2022 OpenFAD Schedule
Date               Events
May 2, 2022        OpenFAD evaluation plan available; call for participation in AC challenge; AC training/validation set announcement
September 5, 2022  AC leaderboard opens
December 2, 2022   AC submission deadline
December 15, 2022  AC final report available (top-performers notification)
January 7, 2023    Workshop on Fine Grained Activity Detection (FGAD'23) at WACV2023
Consented Activities of People (CAP)
Public CAP Dataset Access

CAP is a new annotated dataset of fine-grained and coarse-grained activity classes performed by consented people, curated using the Visym Collector platform.

The CAP training and validation datasets are available for public download as a single archive file containing videos and metadata at the CAP dataset distribution site, which also provides a dataset explorer, a dataset summary, and activity definitions.

Signing Up

In order to participate, a user account is required. Signing up for an account is an easy two-step process using email confirmation, explained here. The help center additionally shows how to reset a lost password or unlock an account.

Access to the Evaluation Dataset

After creating an account and signing into the participation dashboard, please follow the registration workflow on the left side in order to obtain access to the data.

  1. First, create a Site, which represents your point of contact. Detailed instructions.
  2. Next, obtain and sign both the evaluation agreement and the dataset license agreement. The evaluation agreement is a checkbox, while the dataset license agreement is a PDF document that must be downloaded, filled out, scanned, and uploaded; it will then be validated by the OpenFAD22 license liaison. Detailed instructions.
  3. Once license access has been established, the dataset section on the bottom right of the dashboard will point to a download page.
Register for Track Participation

In the next workflow step, please select which track (AC/TAD) to participate in.

How To Submit System Output

System output submissions to the leaderboard must be made through the web platform, following the submission instructions described on the webpage (Submission Management). To prepare your submission, first create a gzipped tar file of your system output CSV via the UNIX command ‘tar cvzf [submission-name].tgz [submission-file-name].csv’, then make your submission as follows:

  1. Navigate to your “Dashboard”
  2. Under “Submission Management”, click your task
  3. Add a new “System” or use an existing system
  4. Click on “Upload”
  5. Fill in the form and click “Submit”
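The packaging step above can be sketched as follows; the file names here are hypothetical placeholders, and the CSV content is a dummy line rather than real system output.

```shell
#!/bin/sh
# Create a placeholder system-output CSV (stand-in for real output).
echo "placeholder,system,output" > my_system_output.csv

# Package the CSV as a gzipped tar archive for leaderboard upload,
# following the pattern: tar cvzf [submission-name].tgz [file].csv
tar cvzf my_submission.tgz my_system_output.csv

# Sanity check: list the archive contents before uploading.
tar tzf my_submission.tgz
```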
How To Validate

The OpenFADScorer package (to be released publicly soon) contains an output format checker that validates submissions. To validate your system output locally, use the following command lines:

  • fad-scorer validate-class -r AC_index.csv -y system_output.csv
  • fad-scorer validate-det -r TAD_index.csv -y system_output.csv

All clips must be processed independently of each other within a given task and across all tasks, meaning content extracted from the video clip data must not affect the processing of another clip.

While participants may report their own results, participants may not make advertising claims about their standing in the evaluation, regardless of rank, or winning the evaluation, or claim NIST endorsement of their system(s). The following language in the U.S. Code of Federal Regulations (15 C.F.R. § 200.113) shall be respected: "NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST, or to reports or results furnished by NIST in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material, or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results."

At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts, unaltered and with appropriate reference to their source.