The AC task is to assign a single activity class label, drawn from a set of predefined classes, to each video clip and to provide a confidence score. The system will be presented with nominally 3-second trimmed video clips, each containing one of the activity classes, and the AC system must output the activity class and a confidence score, with higher values indicating the clip is more likely to contain that activity.
The TAD task is to automatically detect and temporally localize each activity instance in untrimmed video. The system will be presented with nominally 45-second, untrimmed video clips, in which no two activity instances overlap in time. For each detected activity instance, the TAD system must output the activity class, a start frame, an end frame, and a confidence score, with higher values indicating the instance is more likely to have occurred.
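As a concrete sketch, the AC and TAD outputs described above could be written as CSV files along the following lines. The column names, file names, and row values here are assumptions for illustration only; consult the evaluation plan for the authoritative output format.

```shell
# Hypothetical CSV layouts -- not the official schema.

# AC output: one row per clip (file, activity class, confidence in [0, 1]).
printf '%s\n' \
  'video_file_id,activity_id,confidence_score' \
  'clip_0001.mp4,person_opens_door,0.91' > ac_output.csv

# TAD output: one row per detected instance
# (file, activity class, start frame, end frame, confidence).
printf '%s\n' \
  'video_file_id,activity_id,frame_start,frame_end,confidence_score' \
  'video_0001.mp4,person_opens_door,120,210,0.78' > tad_output.csv
```

Note that the TAD rows localize each instance by frame indices rather than timestamps, matching the start frame/end frame fields required above.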
Please refer to the evaluation plan below for the detailed tasks and relevant metrics.
| Date | Milestone |
|---|---|
| May 5, 2023 | TAD/AC evaluation leaderboard opens |
| September 15 | TAD/AC submission deadline |
| September 18 | Challenge winner notification |
| September 21 | TAD/AC final report available |
| October 2, 2023 | 2nd Fine Grained Activity Detection (ICCV FGAD'23) Workshop at ICCV23 |
CAP contains a new annotated dataset of fine-grained and coarse-grained activity classes of consented people, curated using the Visym Collector platform.
The CAP training and validation datasets are available for public download as a single archive file containing videos and metadata at the CAP dataset distribution site. The CAP dataset site includes a dataset explorer, a dataset summary, and activity definitions.
To participate in the evaluation, a researcher must create a user account, complete the agreements to access the data, register for participation in the evaluation track, and then make system output submissions. These and other topics are covered in the Help Center.
In order to participate, a user account is required. Signing up for an account is an easy two-step process using email confirmation, as explained in the User Account section of the Help Center. The Help Center additionally shows how to reset a lost password or unlock an account.
After creating an account and signing into the participation dashboard, please follow the registration workflow on the left side to obtain access to the data.
In the next workflow step, please select which track (AC/TAD) to participate in.
System output submissions to the leaderboard must be made through the web platform using the submission instructions described on the webpage (Submission Management Help Topic). To prepare your submission, you will first create a gzipped tar (.tgz) archive of your system output CSV file via the UNIX command `tar cvzf [submission-name].tgz [submission-file-name].csv`, and then make your submission as follows:
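The packaging step above can be sketched end to end as follows. The file names are placeholders for illustration; substitute your team's actual system output CSV and chosen submission name.

```shell
# Placeholder system output file (your real CSV goes here).
printf '%s\n' 'clip_0001.mp4,person_opens_door,0.91' > my_system_output.csv

# Package the CSV as a gzipped tar archive, as required for submission.
tar cvzf my_submission.tgz my_system_output.csv

# List the archive contents to verify the CSV was included
# before uploading (prints my_system_output.csv).
tar tzf my_submission.tgz
```

Listing the archive with `tar tzf` before uploading is a quick sanity check that the CSV was packaged at the top level of the archive rather than under a directory path.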
The OpenFADScorer package contains an output format checker that validates the submission. To validate your system output locally please use the following command-line:
All clips must be processed independently of each other within a given task and across all tasks; that is, content extracted from one video clip must not affect the processing of any other clip.
Each team may make one scoreable submission per day, evaluate up to 4 systems, and submit up to 100 total submissions.
While participants may report their own results, participants may not make advertising claims about their standing in the evaluation, regardless of rank, or about winning the evaluation, or claim NIST endorsement of their system(s). The following language in the U.S. Code of Federal Regulations (15 C.F.R. § 200.113) shall be respected: "NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST, or to reports or results furnished by NIST in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material, or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results."
At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts, unaltered and with appropriate reference to their source.