Open Fine Grained Activity Detection Challenge (OpenFAD)

  • A half-day 2nd Workshop on Fine Grained Activity Detection will be held on October 2, 2023 at ICCV’23 in Paris, France!
  • The leaderboard competition has ended. Congratulations go to:
    • Our co-leaders on the Activity Classification Task: Cloudwalk Technology and DeepGlint, both with a mAP of 0.920.
    • Our leader on the Temporal Activity Detection Task: the team of Waseda University, Meisei University, and SoftBank Corporation, with a mAP of 0.212.
    Each of the three teams will be given a 20-minute speaking slot at the 2nd International Workshop on Fine Grained Activity Detection (ICCV FGAD’23).
The Open Fine-grained Activity Detection (OpenFAD) challenge is an activity classification and detection evaluation that measures how well systems can automatically classify or temporally detect fine-grained activities drawn from the Consented Activities of People (CAP) dataset, collected using handheld devices. The evaluation will culminate with the 2nd Workshop on Fine Grained Activity Detection, to be held on October 2, 2023 at ICCV’23 in Paris, France!
  • Activity Classification (AC)

    The AC task is to assign a single activity class label, from a set of predefined classes, to each video clip, along with a confidence score. The system is presented with nominally 3-second trimmed video clips, each containing one of the activity classes. For each clip, the AC system must output the activity class and a confidence score, with higher values indicating the clip is more likely to contain that activity.

  • Temporal Activity Detection (TAD)

    The TAD task is to automatically detect and temporally localize each activity instance in untrimmed video. The system is presented with nominally 45-second untrimmed video clips, in which no two activity instances overlap in time. For each detected activity instance, the TAD system must output the activity class, a start frame, an end frame, and a confidence score, with higher values indicating the instance is more likely to have occurred.

    Please refer to the evaluation plan below for the detailed tasks and relevant metrics.
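
To make the two output descriptions above concrete, here is a small sketch of what system output rows might look like. The column names and activity labels below are illustrative assumptions, not the official submission schema; consult the evaluation plan for the authoritative format.

```python
import csv

# Hypothetical column names and activity labels, for illustration only;
# consult the evaluation plan for the authoritative submission schema.

# AC: one activity label plus a confidence score per trimmed clip.
ac_rows = [
    {"video_file_id": "clip_0001.mp4", "activity_id": "person_opens_door",
     "confidence_score": 0.93},
    {"video_file_id": "clip_0002.mp4", "activity_id": "person_closes_door",
     "confidence_score": 0.71},
]
with open("ac_system_output.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["video_file_id", "activity_id",
                                      "confidence_score"])
    w.writeheader()
    w.writerows(ac_rows)

# TAD: each detected instance carries a class, start frame, end frame,
# and confidence score.
tad_rows = [
    {"video_file_id": "untrimmed_0001.mp4", "activity_id": "person_opens_door",
     "frame_start": 120, "frame_end": 210, "confidence_score": 0.64},
]
with open("tad_system_output.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["video_file_id", "activity_id",
                                      "frame_start", "frame_end",
                                      "confidence_score"])
    w.writeheader()
    w.writerows(tad_rows)
```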

Evaluation Plan
OpenFAD is open worldwide; we invite all organizations to submit their system results to the OpenFAD leaderboards. The challenge provides data (e.g., training, validation, and test sets) so that participants can train and run a system on their own hardware platform and submit their system outputs to a web-based leaderboard that displays scoring results.
To take part in the OpenFAD challenge you need to register on this website and complete the data license to download the data. Once your system is functional, you will be able to upload your system outputs to the challenge website and see your scoring results displayed on the leaderboard. Please refer to Instructions for the details.
If you have any questions, please email the OpenFAD team:
2023 OpenFAD Schedule
| Date | Event |
|------|-------|
| May 5, 2023 | TAD/AC evaluation leaderboard opens |
| September 15, 2023, 23:59 EDT (extended from August 31) | TAD/AC submission deadline |
| September 18, 2023 (extended from September 5) | Challenge winner notification |
| September 21, 2023 (extended from September 18) | TAD/AC final report available |
| October 2, 2023 | 2nd Fine Grained Activity Detection Workshop (ICCV FGAD'23) at ICCV’23 |
Consented Activities of People (CAP)
Public CAP Dataset Access

CAP is a new annotated dataset of fine-grained and coarse-grained activity classes performed by consenting people, curated using the Visym Collector platform.

The CAP training and validation datasets are available for public download as a single archive file containing videos and metadata at the CAP dataset distribution site. The CAP dataset site includes a dataset explorer, a dataset summary, and activity definitions.

Evaluation Participation Instructions

To participate in the evaluation, a researcher must create a user account, complete the agreements to access the data, register for participation in the evaluation track, and then submit system output. These and other topics are covered in the Help Center.

Signing Up for a User Account

In order to participate, a user account is required. Signing up for an account is an easy two-step process using email confirmation, explained in the User Account Help Center section. The Help Center also shows how to reset a lost password or unlock an account.

Access to the Evaluation Dataset

After creating an account and signing into the participation dashboard, please follow the registration workflow on the left side in order to obtain access to the data.

  1. First, create a Site, which represents your point of contact for data licensing. See the Creating and Joining Sites Help Section.
  2. Next, obtain and sign both the evaluation agreement and the dataset license agreement. The evaluation agreement is a checkbox, while the dataset license agreement is a PDF document that must be downloaded, filled out, scanned, and uploaded; it will then be validated by the OpenFAD license liaison. See the Managing Licenses Help Section.
  3. After licensing access has been established (requiring evaluation staff review), the dataset section at the bottom right of the dashboard will point to a download page.
Register for Track Participation

In the next workflow step, please select which track (AC and/or TAD) to participate in.

How To Submit System Output

System output submissions to the leaderboard must be made through the web platform using the submission instructions described on the webpage (Submission Management Help Topic). To prepare your submission, first package your system output CSV file into a gzipped tar file via the UNIX command ‘tar cvzf [submission-name].tgz [submission-file-name].csv’, and then make your submission as follows:

  1. Navigate to your “Dashboard”
  2. Under “Submission Management”, click your task
  3. Add a new “System” or use an existing system
  4. Click on “Upload”
  5. Fill in the form and click “Submit”
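As a sketch, the packaging step can also be done from Python with the standard tarfile module. The file names below are placeholders for your actual submission name and output file, and the CSV written here is only a stand-in for real system output.

```python
import csv
import tarfile

# Write a placeholder system-output CSV (stand-in for your real output file).
with open("system_output.csv", "w", newline="") as f:
    csv.writer(f).writerow(["placeholder_header"])

# Package it as a gzipped tarball, equivalent to the UNIX command:
#   tar cvzf my_submission.tgz system_output.csv
with tarfile.open("my_submission.tgz", "w:gz") as tar:
    tar.add("system_output.csv")

# Inspect the archive contents.
with tarfile.open("my_submission.tgz", "r:gz") as tar:
    print(tar.getnames())  # ['system_output.csv']
```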
How To Validate

The OpenFADScorer package contains an output format checker that validates a submission. To validate your system output locally, please use the following command lines:

  • fad-scorer validate-class -r AC_index.csv -y system_output.csv
  • fad-scorer validate-det -r TAD_index.csv -y system_output.csv
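
Before running the official validator, a rough local pre-check can catch obvious problems such as missing columns or non-numeric confidence scores. The sketch below is NOT the official checker, and the column names it checks are assumptions for illustration; fad-scorer remains the authoritative validator.

```python
import csv

def rough_check(path, required_columns):
    """Lightweight local sanity check of a system output CSV.

    NOT the official validator: column names are assumptions for
    illustration; `fad-scorer validate-class` / `validate-det` is
    the authoritative format check.
    """
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = [c for c in required_columns
                   if c not in (reader.fieldnames or [])]
        if missing:
            return [f"missing columns: {missing}"]
        for line_no, row in enumerate(reader, start=2):
            score = row.get("confidence_score")
            if score is not None:
                try:
                    float(score)
                except ValueError:
                    problems.append(
                        f"line {line_no}: confidence_score is not numeric")
    return problems
```

For example, `rough_check("system_output.csv", ["video_file_id", "activity_id", "confidence_score"])` returns an empty list when every required column is present and every confidence score parses as a number.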

All clips must be processed independently of each other, both within a given task and across all tasks: content extracted from one video clip must not affect the processing of another clip.

Each team may make one scoreable submission per day, evaluate up to 4 systems, and submit up to 100 total submissions.

While participants may report their own results, they may not make advertising claims about their standing in the evaluation, regardless of rank, or about winning the evaluation, nor claim NIST endorsement of their system(s). The following language from the U.S. Code of Federal Regulations (15 C.F.R. § 200.113) shall be respected: "NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST, or to reports or results furnished by NIST, in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material, or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results."

At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts, unaltered and with appropriate reference to their source.